Tree-adjoining grammar (TAG) and combinatory categorial grammar (CCG) are two well-established mildly context-sensitive grammar formalisms that are known to have the same expressive power on strings (i.e., generate the same class of string languages). It is demonstrated that their expressive power on trees also essentially coincides. In fact, CCGs without lexicon entries for the empty string and with only first-order rules of degree at most 2 suffice for the full expressive power.
1 Introduction
Combinatory categorial grammar (CCG) (Steedman, 2000; Steedman and Baldridge, 2011) is one of several grammar formalisms that were introduced as an extension of context-free grammars. In particular, CCG extends the classical categorial grammar (Bar-Hillel et al., 1960), which has the same expressivity as context-free grammar, by rules that are inspired by combinatory logic (Curry et al., 1958). CCG is a mildly context-sensitive grammar formalism (Joshi, 1985). Mildly context-sensitive grammar formalisms are efficiently parsable (i.e., in polynomial time) and have expressivity beyond the context-free languages. They are able to express a limited amount of cross-serial dependencies and have the constant growth property. Because of these features and its notion of syntactic categories, which is quite intuitive for natural languages, CCG has become widely applied in computational linguistics (Steedman, 2000). Further, it can be enhanced by semantics through the lambda calculus.
CCG is based on a lexicon and a rule system. The lexicon assigns syntactic categories to the symbols of an input string and the rule system describes how neighboring categories can be combined into new categories. Each category has a target, which is similar to the return type of a function, and optionally a number of arguments. In contrast to the arguments of functions, each argument has a directionality that indicates whether it is expected on the left or the right side. If repeated combination of categories leads to a (binary) derivation tree that comprises all input symbols and is rooted in an initial category, then the input string is accepted.
When defining CCG, there are many degrees of freedom yielding a number of different variants (Steedman, 2000; Baldridge, 2002; Steedman and Baldridge, 2011; Kuhlmann et al., 2015). This is a consequence of the linguistically motivated need to easily express specific structures that have been identified in a particular theory of syntax for a given natural language. However, we and others (Kuhlmann et al., 2015) are interested in the expressive power of CCGs as generators of formal languages, since this allows us to disentangle the confusion of subtly different formalisms and identify the principal structures expressible by a common core of the formalisms. As linguistic structure calls for a representation that goes beyond strings, we aim for a characterization of expressive power in terms of the generated trees.
The most famous result on the expressive power of CCG is by Vijay-Shanker and Weir (1994), showing that tree-adjoining grammar (TAG), linear-indexed grammar (LIG), head grammar (HG), and CCG generate the same string languages. An equivalent automaton model is the embedded push-down automaton (Vijay-Shanker, 1988). In the definition of CCG used by Vijay-Shanker and Weir (1994), the lexicon allows ε-entries, which assign syntactic categories to the empty string ε. Their rule system restricts rules to specific categories and limits the rule degree. CCGs with unbounded rule degree are Turing-complete (Kuhlmann et al., 2018). Prefix-closed CCGs without target restrictions, in which the rules obey special closure properties, are less powerful. This even holds for multimodal CCGs (Kuhlmann et al., 2010, 2015), which allow many types of directionality indicators (i.e., slashes).
When going beyond the level of string languages, there exist different notions of strong generative power. We consider two formalisms as strongly equivalent if their generated derivation tree languages coincide modulo relabelings. For example, the well-known local and regular tree grammars (Gécseg and Steinby, 1997) are strongly equivalent. On the other hand, Hockenmaier and Young (2008) regard two formalisms as strongly equivalent if they capture the same sets of dependencies. Then there exist specific scrambling cases whose dependencies can be expressed by their CCG, but not by Lexicalized TAG (LTAG). Their CCG are syntactically more expressive than ours and allow type-raising, whereas the strong generative capacity (in our sense) of LTAG is strictly smaller than that of TAG (Kuhlmann and Satta, 2012). The dependencies expressed by CCG without rule restrictions and TAG are shown to be incomparable by Koller and Kuhlmann (2009).
Returning to our notion of strong generative capacity, Kuhlmann et al. (2019) investigated the tree-generative capacity of CCG without ε-entries. The generated trees are always binary. CCG with application and first-degree composition rules generate exactly the regular tree languages (Gécseg and Steinby, 1997). Without the composition rules, only a proper subset can be generated. The languages of CCG rule trees (i.e., trees labeled by applied rules instead of categories) with bounded rule degree can also be generated by simple monadic context-free tree grammar (sCFTG).
For the converse direction, we show that the tree languages generated by sCFTG can also be generated by CCG, which shows strong equivalence. This answers several open questions. Since sCFTG and TAG are strongly equivalent (Kepser and Rogers, 2011), our result also shows strong equivalence of CCG and TAG. In contrast to the construction of Vijay-Shanker and Weir (1994), which relies heavily on ε-entries, our construction avoids them and shows that they do not increase the expressive power of CCG. Additionally, we only use rules up to degree 2 and first-order categories (i.e., arguments are atomic), which shows that larger rule degree or higher-order categories do not increase the expressive power.
Our construction proceeds roughly as follows. We begin with a spine grammar, which is a variant of sCFTG that is also strongly equivalent to TAG. We encode its spines using a context-free grammar, which in turn can be represented by a special variant of push-down automata. Finally, the runs of the push-down automaton are simulated by a CCG such that the stack operations of the automaton are realized by adding and removing arguments of the categories.
2 Preliminaries
The nonnegative integers are ℕ. For every k ∈ ℕ, we let [k] = {i ∈ ℕ ∣ 1 ≤ i ≤ k}. The set Σ* contains all strings over the finite set Σ including the empty string ε. We let Σ+ = Σ* ∖ {ε}. The length of w ∈ Σ* is |w|, and concatenation is written as juxtaposition. The prefixes Pref(w) of a string w ∈ Σ* are {u ∈ Σ* ∣ ∃v ∈ Σ*: w = uv}. A string language is a subset L ⊆ Σ*. Given a relation ⇒ ⊆ S², we let ⇒* be the reflexive, transitive closure of ⇒.
2.1 Tree Languages
The special leaf symbol □ is reserved and represents a hole in a tree. The set of contexts contains all trees over Σ ∪ {□} in which □ occurs exactly once; the position of □ in a context C is thus unique. Moreover, given t ∈ T we simply write C[t] for the tree obtained from C by replacing □ with t.
A tuple (ρ0,ρ1,ρ2) is called a relabeling if ρk: Σk → Δk for all k ∈ {0,1,2} and a ranked set Δ. It induces the map ρ: T → T given by ρ(c(t1,…,tk)) = ρk(c)(ρ(t1),…,ρ(tk)) for all k ∈ {0,1,2}, c ∈ Σk, and t1,…,tk ∈ T.
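To make the induced map concrete, here is a small illustrative sketch (the encoding of trees as nested tuples (label, child1, …, childk) and the name relabel are our own, not part of the formal development):

```python
def relabel(rho, t):
    """Apply the relabeling (rho[0], rho[1], rho[2]) to a nested-tuple tree."""
    label, *children = t
    new_label = rho[len(children)][label]     # rho_k maps Sigma_k to Delta_k
    return (new_label,) + tuple(relabel(rho, c) for c in children)

# Example: relabel the binary symbol 'sigma' to 's' and the leaf 'alpha' to 'a'.
rho = {0: {'alpha': 'a'}, 1: {}, 2: {'sigma': 's'}}
print(relabel(rho, ('sigma', ('alpha',), ('alpha',))))  # ('s', ('a',), ('a',))
```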
2.2 Combinatory Categorial Grammar
In the following, we give a short introduction to CCG. Given an alphabet A of atoms or atomic categories and a set of slashes D = {/, ∖} indicating directionality, the set C(A) of categories is the smallest set such that A ⊆ C(A) and (c | c′) ∈ C(A) for all c, c′ ∈ C(A) and | ∈ D. We usually write the categories in infix notation and the slashes are left-associative by convention, so each category takes the form c = a |1c1 ⋯ |kck, where a ∈ A and |i ∈ D for all i ∈ {1,…,k}. The atom a is called the target of c and written as tar(c). The slash-category pairs |ici are called arguments and their number k is called the arity of c, denoted by ar(c). In addition, we write arg(c,i) for the i-th argument |ici of c. In the sense of trees, the sequence of arguments forms a context. We distinguish between two types of categories: in first-order categories, all arguments are atomic, whereas in higher-order categories, the arguments can have arguments themselves.
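As an illustrative aside (the encoding is ours and merely mirrors the definitions), first-order categories can be represented as a pair of a target and a tuple of slash-argument pairs:

```python
def tar(c):      # target of c
    return c[0]

def ar(c):       # arity of c, i.e., the number of arguments
    return len(c[1])

def arg(c, i):   # i-th argument |i ci of c (1-indexed as in the text)
    return c[1][i - 1]

# The category a/b\c; by left-associativity this reads as (a/b)\c.
c = ('a', (('/', 'b'), ('\\', 'c')))
assert tar(c) == 'a' and ar(c) == 2 and arg(c, 2) == ('\\', 'c')
```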
A derivation tree of an example CCG is depicted in Figure 1. We start at the bottom with categories taken from the lexicon in accordance with the input symbols. Then neighboring categories are combined until we arrive at the root with initial category ⊥, so the input word is accepted.
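The combination of neighboring categories can be sketched for forward rules on the first-order encoding above (a simplified illustration of our own; degree-0 combination is application, higher degrees are composition):

```python
def combine_forward(primary, secondary):
    """Combine x/y with y|1 z1 ... |k zk into x|1 z1 ... |k zk, else None."""
    tx, xargs = primary
    ty, yargs = secondary
    if not xargs or xargs[-1] != ('/', ty):
        return None
    return (tx, xargs[:-1] + yargs)

print(combine_forward(('x', (('/', 'y'),)), ('y', ())))             # application: x
print(combine_forward(('x', (('/', 'y'),)), ('y', (('/', 'z'),))))  # composition: x/z
```

Backward rules are symmetric, with the secondary category appearing on the left.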
3 Push-down Automata
We start by introducing a Moore variant of push-down automata (Autebert et al., 1997) that is geared towards our needs and still accepts the context-free languages (of strings of length ≥ 2). It will be similar to the push-down Moore machines of Decker et al. (2013). Instead of processing input symbols as part of transitions (as in Mealy machines), Moore machines output a unique input symbol in each state (Fleischner, 1977). For every set Γ, we let Γ≤1 = {ε}∪ Γ and Γ≥2 = {w ∈ Γ*∣2 ≤|w|} be the sets of strings over Γ of length at most 1 and at least 2, respectively.
A Moore push-down automaton (MPDA) consists of (i) finite sets Q, Σ, and Γ of states, input symbols, and stack symbols, respectively, (ii) a set δ ⊆ (Q × Γ≤1 × Γ≤1 × Q) ∖ (Q × Γ × Γ × Q) of transitions, (iii) an output function τ : Q → Σ, and (iv) sets I,F ⊆ Q of initial and final states, respectively.
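The intended semantics can be sketched by a small interpreter (our reading: transitions are 4-tuples (q, pop, push, q′) with '' for ε, a run outputs τ(q) in every visited state, and acceptance requires ending in a final state with empty stack; the precise acceptance condition is an assumption of this sketch):

```python
def accepts(mpda, word):
    """Check whether the MPDA (Q, delta, tau, I, F) accepts `word`."""
    Q, delta, tau, I, F = mpda
    def search(q, stack, i):
        if tau[q] != word[i]:                 # Moore-style output in state q
            return False
        if i == len(word) - 1:                # last state of the run
            return q in F and not stack
        for (p, pop, push, p2) in delta:
            if p != q:
                continue
            if pop:                           # popping transition
                if stack and stack[-1] == pop and search(p2, stack[:-1], i + 1):
                    return True
            else:                             # pushing or internal transition
                if search(p2, stack + (push,) if push else stack, i + 1):
                    return True
        return False
    return len(word) >= 2 and any(search(q0, (), 0) for q0 in I)
```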
MPDAs accept exactly the context-free languages of strings of length at least 2.
The MPDA is pop-normalized if there exists a map pop: Γ → Q such that q′ = pop(γ) for every transition (q,γ,ε,q′) ∈ δ. In other words, for each stack symbol γ ∈ Γ there is a unique state pop(γ) that the MPDA enters whenever γ is popped from the stack.
Later on, we will simulate the runs of an MPDA in a CCG such that subsequent configurations are represented by subsequent primary categories. Popping transitions are modeled by removing the last argument of a category. Thus, the target state has to be stored in the previous argument. This argument is added when the according pushing transition is simulated, so at that point we already have to be aware in which state the MPDA will end up after popping the symbol again. This will be explained in more detail in Section 7.
We can easily establish this property by storing a state in each stack symbol. Each pushing transition is replaced by one variant for each state (i.e., we guess a state when pushing), but when a symbol is popped, this is only allowed if the state stored in it coincides with the target state.
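A sketch of this construction on the transition relation (same encoding as in the interpreter above; the new stack symbols are pairs of an old symbol and the guessed state):

```python
def pop_normalize(Q, delta):
    """Replace each pushing transition by one variant per guessed state."""
    new_delta = set()
    for (q, pop, push, q2) in delta:
        if push:                     # pushing: guess the state entered on pop
            for guess in Q:
                new_delta.add((q, '', (push, guess), q2))
        elif pop:                    # popping: only into the guessed state
            new_delta.add((q, (pop, q2), '', q2))
        else:                        # internal transitions remain unchanged
            new_delta.add((q, '', '', q2))
    return new_delta                 # now pop((gamma, p)) = p for every symbol
```

The map pop of the resulting MPDA is then simply the projection of a stack symbol to its stored state.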
For every MPDA we can construct an equivalent pop-normalized MPDA.
The next statement shows that we can provide a form of look-ahead on the output. In each new symbol we store the current as well as the next output symbol. Standard techniques can be used to prove the statement. We will briefly sketch why this look-ahead is necessary. Before constructing the CCG, the MPDA will be used to model a spine grammar. The next output symbol of the MPDA corresponds to the label of the parent node along a so-called spine of a tree generated by the spine grammar. From this parent node we can determine the possible labels of its other child. This information will be used in the CCG to control which secondary categories are allowed as neighboring combination partners.
For every context-free language L ⊆ Σ≥2 there exists a pop-normalized MPDA that accepts the look-ahead annotated version of L, in which each symbol is paired with the next output symbol.
4 Spine Grammars
Now we move on to representations of tree languages. We first recall context-free tree grammars (Rounds, 1969), but only the monadic simple variant (Kepser and Rogers, 2011).
A simple monadic context-free tree grammar (sCFTG) is a tuple consisting of (i) disjoint ranked alphabets N and Σ of nonterminal and terminal symbols with N = N1 ∪ N0 and Σ1 = ∅, (ii) a nullary start nonterminal S ∈ N0, and (iii) a finite set P ⊆ P0 ∪ P1 of productions, where the productions in P0 replace a nullary nonterminal by a tree and the productions in P1 replace a unary nonterminal by a context.
Spine grammars (Fujiyoshi and Kasai, 2000) are a restriction of simple monadic context-free tree grammars that remains equally expressive modulo relabelings by Lemma 5.4 of Fujiyoshi and Kasai (2000). Let us clarify this result. Clearly, each spine grammar is itself an sCFTG, and for each sCFTG there exist a spine grammar and a relabeling ρ such that the relabeled tree language of the spine grammar equals the tree language of the sCFTG. Although sCFTGs are more established, we elect to utilize spine grammars because of their essential notion of spines.
The sCFTG is a spine grammar if there exists a map d: Σ2 → {1,2} such that, for every production (n → C) ∈ P with n ∈ N1 and every position w on the path from the root of C to □ with C(w) ∈ Σ2, the path continues to the child w·i with i = d(C(w)).
Henceforth let G be a spine grammar with map d: Σ2 → {1,2}. Consider a production (n → C) ∈ P with n ∈ N1. The spine of C is simply the path from the root of C to the unique occurrence of □. The special feature of a spine grammar is that the symbols along the spine indicate exactly in which direction the spine continues. Since only the binary terminal symbols offer branching, this feature amounts to the existence of a map d that tells us for each binary terminal symbol σ ∈ Σ2 whether the spine continues to the left, in which case d(σ) = 1, or to the right, in which case d(σ) = 2. This map d, called the spine direction, applies to all instances of σ in all productions with spines. In the original definition of spine grammars (Fujiyoshi and Kasai, 2000, Definition 3.2), only nonterminal symbols have a spine direction. By creating copies of binary terminal symbols we can show that both variants are equivalent modulo relabelings.
The spine grammar G is in normal form if each (n → r) ∈ P is of the form (i) start: r = b(α) or r = α for some b ∈ N1 and α ∈ Σ0, (ii) chain: r = b1(b2(□)) for some b1,b2 ∈ N1, or (iii) terminal: r = σ(□,a) or r = σ(a,□) for some σ ∈ Σ2 and a ∈ N0 ∖ {S}.
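For illustration, the three shapes can be checked on right-hand sides encoded as nested tuples with leaves written (α,) and the hole written HOLE (the encoding is our own sketch):

```python
HOLE = ('HOLE',)

def is_start(r, N1, Sigma0):        # r = (alpha,)  or  r = (b, (alpha,))
    if len(r) == 1:
        return r[0] in Sigma0
    return len(r) == 2 and r[0] in N1 and len(r[1]) == 1 and r[1][0] in Sigma0

def is_chain(r, N1):                # r = (b1, (b2, HOLE))
    return (len(r) == 2 and r[0] in N1 and len(r[1]) == 2
            and r[1][0] in N1 and r[1][1] == HOLE)

def is_terminal(r, N0, Sigma2, S):  # r = sigma(HOLE, a)  or  sigma(a, HOLE)
    if len(r) != 3 or r[0] not in Sigma2:
        return False
    others = [c for c in r[1:] if c != HOLE]
    return (len(others) == 1 and len(others[0]) == 1
            and others[0][0] in N0 and others[0][0] != S)
```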
The following result is a variant of Theorem 1 of Fujiyoshi and Kasai (2000).
For every spine grammar there is a strongly equivalent normalized spine grammar.
5 Tree-adjoining Grammars
Before we proceed we will briefly introduce TAG and sketch how a spine grammar is obtained from it. TAG is a mildly context-sensitive grammar formalism that operates on a set of elementary trees of which a subset is initial. To generate a tree, we start with an initial tree and successively splice elementary trees into nodes using adjunction operations. In an adjunction, we select a node, insert a new tree there, and reinsert the original subtree below the selected node at the distinguished and specially marked foot node of the inserted tree. We use the non-strict variant of TAG, in which the root and foot labels of the inserted tree need not coincide with the label of the replaced node to perform an adjunction. To control at which nodes adjunction is allowed, each node is equipped with two types of constraints. The selective adjunction constraint specifies a set of trees that can be adjoined and the Boolean obligatory adjunction constraint specifies whether adjunction is mandatory. Only trees without obligatory adjunction constraints are part of the generated tree language.
Figure 4 shows the elementary trees of an example TAG. Only tree 1 is initial and foot nodes are marked by a superscript asterisk ⋅* on the label. Whenever adjunction is forbidden (i.e., empty set as selective adjunction constraint and non-obligatory adjunction), we omit the constraints altogether. Otherwise, the constraints are put next to the label. For example, {2,3}+ indicates that tree 2 or 3 must (+ = obligatory) be adjoined.
We briefly sketch the transformation from TAG to sCFTG by Kepser and Rogers (2011). TAG is a notational variant of footed simple CFTG, in which all variables in right-hand sides of productions appear in order directly below a designated foot node. To obtain an sCFTG, the footed simple CFTG is first converted into a spine grammar, where the spine is the path from the root to the foot node, and then brought into normal form using the construction of Fujiyoshi and Kasai (2000). The spine grammar of Example 11 is strongly equivalent to the TAG shown in Figure 4.
6 Decomposition into Spines
We proceed with the construction starting from the normalized spine grammar G. First, we construct a context-free grammar (CFG) that captures all information of G. It represents the spinal trees (from bottom to top) as strings and enriches the symbols with the spine generator (initialized by start productions and preserved by chain productions) and a non-spinal child (given by terminal productions). The order of these annotations depends on the spine direction of the symbol. The leftmost symbol of each generated string only has a spine generator annotated, since the bottom of the spine has no children. To simplify the notation, we write ng for (n,g) ∈ N2, αn for (α,n) ∈ Σ0 × N, and σn1,n2 for (σ,n1,n2) ∈ Σ2 × N2.
Note that each string generated by the CFG belongs to (Σ0 × N)(Σ2 × N2)*. Next we define how to reassemble those spines to form trees again, which then relabel to the original trees generated by G. The operation given in the following definition describes how a string generated by the CFG can be transformed into a tree by attaching subtrees in the non-spinal direction of each symbol, whereby the non-spinal child annotation of the symbol and the spinal annotation of the root of the attached tree have to match.
To obtain the tree language defined by G, it is necessary to apply this operation recursively on the set of spines.
Let L ⊆ (Σ0 × N)(Σ2 × N2)*. We inductively define the tree language generated by L to be the smallest tree language T such that, for every w ∈ L, each tree assembled from w by attaching trees of T with matching spine generators again belongs to T.
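The assembly of a single spine can be sketched as follows (illustrative encoding of our own: a spine is a list [(α, n), (σ, n1, n2), …], d is the spine direction, and pick(g) is assumed to supply an already assembled tree with spine generator g):

```python
def attach(spine, d, pick):
    """Assemble a tree from a spine string, bottom to top."""
    (alpha, n) = spine[0]
    t = ((alpha, n),)                    # the bottom of the spine is a leaf
    for (sigma, n1, n2) in spine[1:]:
        i = d[sigma]                     # the spine continues as child i
        g = n2 if i == 1 else n1         # annotation of the non-spinal child
        s = pick(g)                      # attach a tree with matching generator
        t = ((sigma, n1, n2), t, s) if i == 1 else ((sigma, n1, n2), s, t)
    return t
```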
The CFG of Example 13 generates the set of spines, and the tree language generated by this set contains the correctly assembled trees formed from these spines. Figure 3c shows such a tree, since the generator of the main spine is S = s, which is stored in spinal direction in the root label. We can observe the correspondence of annotations in non-spinal direction and the spine generator of the respective child in the same direction.
Next we prove that the tree language generated by G and the tree language assembled from the generated spines coincide modulo relabeling. This shows that the context-free language of spines completely describes the tree language generated by G.
Let G be normalized. Then the tree language of G coincides with the relabeled tree language assembled from the generated spines, where the relabeling π: (Σ0 × N) ∪ (Σ2 × N2) → Σ0 ∪ Σ2 is given by π(αn) = α and π(σn1,n2) = σ for all α ∈ Σ0, σ ∈ Σ2, and n,n1,n2 ∈ N.
There exists a pop-normalized MPDA that accepts the look-ahead annotated spines of length at least 2 generated by the CFG. Moreover, the tree language assembled from these spines (together with the spines of length 1) and the tree language of G coincide modulo relabeling.
The MPDA constructed in Corollary 18 for the spine grammar of Example 11 is depicted in Figure 5. Initial states are indicated using a start marker and final states are marked by a double circle. Pushing and popping stack operations are written with downwards and upwards arrows, respectively. The MPDA consists of two components: the bigger one describes the main spine, and the smaller one describes the side spine. The distinction between the three stack symbols is necessary due to pop-normalization. The distinction between q1 and similar duplicated states is necessary because their previous action determines their produced output symbol, since we recognize the look-ahead annotated spines. For example, τ(p1) = (z,z). To completely capture the behavior of G, we additionally require the set L1, which contains the spines of length 1.
7 Constructing the CCG
In this section, let G be a normalized spine grammar with spine direction d: Σ2 → {1,2} and consider the pop-normalized MPDA constructed in Corollary 18 with pop: Γ → Q. We note that Δ = Σ′ × Σ′′ with Σ′ = {⊲} ∪ (Σ2 × N2) as well as Σ′′ = (Σ0 × N) ∪ (Σ2 × N2). Moreover, let ⊥ ∉ Q be a special symbol. To provide better access to the components of the MPDA, we define some additional maps.
The spine generator gen: Q → N is given for every state q ∈ Q by gen(q) = gen(s2), where τ(q) = (s1,s2) ∈ Δ. Since the MPDA cannot accept strings of length 1, we have to treat them separately. Let L1 ⊆ Δ be the set of spines of length 1, and let gen: L1 → N be given by gen(w) = n for all w = (⊲,αn) ∈ L1. We extend τ: Q → Δ to τ′: (Q ∪ L1) → Δ by τ′(q) = τ(q) for all q ∈ Q and τ′(a) = a for short strings a ∈ L1.
Recall that D = {/,∖}. The slash type slash: (Q ∖ F) → D and combining nonterminal comb: (Q ∖ F) ∪ {⊥} → N of a state q ∈ Q ∖ F tell whether the symbol τ(q) generated by state q occurs as the first or second child of its parent symbol and with which spine generator it is combined. Let τ(q) = (s1,s2) with s1 = σn1,n2 ∈ Σ2 × N2 and s2 ∈ Σ′′. The slash type and the combining nonterminal can be determined from the next symbol s1. Formally, slash(q) = / if d(σ) = 1 and slash(q) = ∖ otherwise. In addition, comb(q) = n3−d(σ) and comb(⊥) = S.
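Under this reading of the output pairs (the placement of the next symbol in the first component is our reconstruction), the two maps can be sketched directly:

```python
def slash(q, tau, d):
    """Slash type of a non-final state q, read off the next symbol."""
    (sigma, n1, n2), _ = tau[q]        # tau[q] = (next symbol, current symbol)
    return '/' if d[sigma] == 1 else '\\'

def comb(q, tau, d, S):
    """Combining nonterminal comb(q) = n_{3-d(sigma)}; comb(BOTTOM) = S."""
    if q == 'BOTTOM':                  # stands for the special symbol written ⊥
        return S
    (sigma, n1, n2), _ = tau[q]
    return n2 if d[sigma] == 1 else n1     # the non-spinal child's generator
```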
We simulate the accepting runs of the MPDA in the spines consisting of primary categories of the CCG. The main idea is that the primary categories on the spine store the current configuration of the MPDA. This is achieved by adding an additional argument for transitions that push a symbol, whereas for each popping transition, an argument is removed. The rightmost argument stores the current state in its first component and the top of the stack in its second component. The previous arguments store the preceding stack symbols in their second components and, in their first components, the state the automaton returns to when the stack symbol stored in the next argument is popped. This argument is added when the according pushing transition is simulated, so at that point we already have to be aware in which state the MPDA will end up after popping the symbol again. To implement the required transformations of consecutive primary categories, the secondary categories need to have a specific structure. This mandates that the categories at the top of a spine (which act as secondary categories unless they belong to the main spine) cannot store their corresponding automaton state in the first component of the last argument as usual, but instead utilize the third component of their target. Thus each argument stores the final state corresponding to its secondary combination partner in the third component. This third component also allows us to decide whether a category is primary: a category is primary if and only if the spine generator of the state stored in the first component of the last argument and the spine generator of the state stored in the last component of the target coincide. This is possible since G is normalized, which yields that attaching spines have a spine generator that is different from the spine generator of the spine that they attach to.
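The following much simplified sketch captures only the stack discipline of this simulation (our encoding: categories are pairs (target, args) with args a tuple of slash-atom pairs and atoms (state, stack symbol, final state); the handling of third components and slash types is deliberately coarse here):

```python
def simulate(cat, transition, pop_map, new_slash):
    """Transform the arguments of a primary category along one transition."""
    (q, pop, push, q2) = transition
    target, args = cat
    sl, (state, gamma, f) = args[-1]
    assert state == q                        # last argument stores the state
    if push:                                 # pushing: append a fresh argument;
        prev = (sl, (pop_map[push], gamma, f))   # the old one stores pop(push)
        return (target, args[:-1] + (prev, (new_slash, (q2, push, f))))
    if pop:                                  # popping: drop the last argument;
        assert gamma == pop                  # the previous one stores q2 already
        return (target, args[:-1])
    return (target, args[:-1] + ((sl, (q2, gamma, f)),))   # internal move
```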
We define the CCG as follows:
Let A = {(q,γ,f) ∈ A′∣gen(f) = comb(q)} with A′ = (Q ∪{⊥}) × Γ × (F ∪ L1). We use ai to refer to the i-th component of an atom a ∈ A. Additionally, let I′ = {(⊥,ε,f) ∈ A∣gen(f) = S}.
Each atom of A consists of three components. The first component stores the current state of the MPDA (or the special symbol ⊥), the second component stores the current symbol at the top of the stack, and the third component stores the final state corresponding to the combining category of the attaching side spine. With this intuition, the rule system directly implements the transitions of the MPDA.
The lexicon assigns categories to symbols that can label leaves, so these symbols are taken from the nullary terminal symbols. The assigned categories consist of a category that appears at the top of a spine and an additional argument for the initial state of an accepting run. The spines of length 1 are translated directly to secondary categories or initial categories.
Let us make a few general observations that hold for all the categories that appear in derivation trees of the CCG: (i) All categories are well-formed. This follows from the fact that only well-formed categories occur in the lexicon and all categories in the derivation trees consist of atoms and arguments that were already present in the lexicon. (ii) All primary categories ax | b obey gen(a3) = gen(b1). This is directly required by the rule system.
Finally, we now describe how to relabel the derivation trees of the CCG, whose categories are built from the input symbols of the MPDA. Note that only well-formed categories will occur in derivation trees. Primary and non-primary categories are relabeled differently. The relabeling ρ: Cwf → Δ is defined for every c ∈ Cwf by ρ(ax | b) = τ′(b1) for all primary categories ax | b ∈ Cwf, i.e., gen(a3) = gen(b1), and by ρ(ax) = τ′(a3) for all initial and secondary categories ax ∈ Cwf.
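On the atom encoding used above, the primary test and the relabeling read as follows (a sketch; gen and tau_prime are assumed to be the maps from this section):

```python
def is_primary(cat, gen):
    """Primary iff the generators of the last argument's state and of the
    third component of the target coincide (observation (ii) above)."""
    (a1, a2, a3), args = cat
    return bool(args) and gen(args[-1][1][0]) == gen(a3)

def rho(cat, gen, tau_prime):
    (a1, a2, a3), args = cat
    if is_primary(cat, gen):                 # primary: label via the state b1
        return tau_prime(args[-1][1][0])
    return tau_prime(a3)                     # otherwise: label via the target
```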
The following property requires that the spine grammar is normalized, so a spine never has the same spine generator as its attached spines.
For all secondary categories ax | b we have gen(a3) ≠ gen(b1).
We are now ready to describe the general form of primary spines of the CCG. Given a primary spine c0…cn read from the lexicon entry towards the root with n ≥ 1, we know that it starts with a lexicon entry c0 = ax | b ∈ ℒ(Δ0) and ends with the non-primary category cn = ax, which as such cannot be further modified. Hence each of the categories c ∈ {c0,…,cn−1} has the form ax |1b1 … |mbm with m ≥ 1. Let bi = (qi,γi,fi) for every i ∈ [m]. The category cn is relabeled to τ′(a3) and each such c is relabeled to τ′(qm). Additionally, unless a1 = ⊥, the first components of all atoms in ax have the same spine generator gen(a1) and gen(q1) = ⋯ = gen(qm), but gen(a1) ≠ gen(q1). Finally, neighboring arguments |i−1bi−1 |ibi in the suffix are coupled such that pop(γi) = qi−1 for all i ∈ [m] ∖ {1}. This coupling is introduced by the rules of second degree and preserved by the other rules.
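The coupling of neighboring arguments amounts to a one-line check (sketch, same encoding as before):

```python
def coupled(args, pop_map):
    atoms = [b for (_, b) in args]           # bi = (qi, gammai, fi)
    return all(pop_map[atoms[i][1]] == atoms[i - 1][0]   # pop(gamma_i) = q_{i-1}
               for i in range(1, len(atoms)))
```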
Using these observations, it can be proved that the primary spines of the CCG relabel to strings accepted by the MPDA and vice versa. Additionally, spines attach in essentially the same manner in the CCG and in the spine grammar. This yields the following main theorem.
Given a spine grammar G, we can construct a CCG that generates the tree language of G modulo relabeling.
Figure 6 shows part of the derivation tree of the constructed CCG that corresponds to the tree of Figure 3a, which is generated by the spine grammar of Example 11. We abbreviate some of the occurring categories as c1, c2, and c3. The labeling of the depicted section is δ γ2 γ2 β2 for the main spine and β η2 for the side spine (see Figure 3a). The corresponding runs of the MPDA start in the states q0 and p0, respectively.
Let us observe how the transitions of the MPDA are simulated by the CCG. The first transition (q0,ε,ε,q1) on the main spine does not modify the stack. It is implemented by replacing the last argument /(q0,ω,γ) by /(q1,ω,γ). The next transition pushes the symbol υ to the stack, so the argument /(q1,ω,γ) is replaced by two arguments. As the stack grows, an additional argument with the new state and stack symbol is added, while the previous argument now stores pop(υ) = q3 to ensure that we enter the correct state after popping υ. It also contains the previous unchanged stack symbol ω. The popping transition on the side spine run is realized by removing /(p0,χ,β).
The third components are required to relabel the non-primary categories. At the bottom of the main spine, c1 is a primary category because q0 and the third component of its target are associated with the same spine generator s. Thus, c1 gets relabeled to τ′(q0). However, for c2 = (q0,ω,γ)/(q1,ω,γ) the spine generators of γ and of the state q1 are different. Hence it is a non-primary category and gets relabeled to τ′(γ).
Concerning the lexicon, c1 is a lexical category because its target can appear at the top of a spine as an initial category with an element of L1 in its third component, while the appended argument (q0,ω,γ) represents an initial configuration of the MPDA. Similarly, c2 is a well-formed secondary category of a rule and the third component of its target is in L1. Therefore, it belongs to a set of secondary categories that forms a subset of the lexicon.
Let us illustrate how the attachment of the side spine to the main spine is realized. The lexicon contains a category whose first two atoms are responsible for performing a transition on the main spine. This part cannot be modified since the rule system disallows it. The target stores the final state of the side spine run in its third component. The appended argument models the initial configuration of the side spine run starting in state p0 with χ on the stack.
For the converse inclusion we utilize Theorem 20 of Kuhlmann et al. (2019). It states that for every CCG there exists an sCFTG that generates the rule trees of the CCG. Whereas derivation trees are labeled by categories, rule trees are labeled by lexicon entries at leaves and by applied rules (instead of the output category) at inner nodes. Rule trees are a natural encoding of derivation trees using only a finite set of labels. As each rule indicates the target and last argument of its output category, rule trees can be relabeled in the same manner as derivation trees. For completeness' sake we restate Definition 16 of Kuhlmann et al. (2019).
Let G be a CCG and T = TR,∅(ℒ(Σ)). A tree t ∈ T is a rule tree if cat(t) ∈ I, where the partial map cat: T → C(A) is inductively defined by (i) cat(a) = a for all lexicon entries a ∈ ℒ(Σ), (ii) cat(r(t1,t2)) = azy for all rules r and trees t1,t2 ∈ T with cat(t1) = az/b and cat(t2) = by, and (iii) cat(r(t1,t2)) = azy for all rules r and trees t1,t2 ∈ T with cat(t1) = by and cat(t2) = az∖b. The set of all rule trees of G is its rule tree language.
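A sketch of the map cat on rule trees (our encoding: leaves ('lex', c), inner nodes ('node', t1, t2), categories as first-order pairs (target, args) as before; None marks undefinedness):

```python
def cat(t):
    """Partial map cat from Definition 16 on encoded rule trees."""
    if t[0] == 'lex':
        return t[1]                                   # (i) lexicon entries
    c1, c2 = cat(t[1]), cat(t[2])
    if c1 is None or c2 is None:
        return None
    (a, zargs), (b, yargs) = c1, c2
    if zargs and zargs[-1] == ('/', b):               # (ii) forward rules
        return (a, zargs[:-1] + yargs)
    (a, zargs), (b, yargs) = c2, c1
    if zargs and zargs[-1] == ('\\', b):              # (iii) backward rules
        return (a, zargs[:-1] + yargs)
    return None
```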
We observe that any category relabeling can equivalently be applied to rule trees instead of derivation trees (because a category relabeling only depends on the target a and the last argument | b of a category ax | b). This yields the second main theorem.
CCGs and sCFTGs are strongly equivalent up to relabeling.
Kepser and Rogers (2011) proved that TAGs and sCFTGs are strongly equivalent, which shows that they are also strongly equivalent (up to relabeling) to CCGs.
CCGs and TAGs are strongly equivalent up to relabeling.
Clearly, from strong equivalence we can conclude weak equivalence as well (without the relabeling since the lexicon provides the relabeling). Weak equivalence was famously proven by Vijay-Shanker and Weir (1994), but Theorem 3 of Kuhlmann et al. (2015) shows that the original construction is incorrect. However, Weir (1988) provides an alternative construction and proof. Our contribution provides a stronger form (and proof) of this old equivalence result. It avoids the ε-entries that the original construction heavily relies on. An ε-entry is a category assigned to the empty string; these interspersed categories form the main building block in the original constructions. The necessity of these ε-entries (Vijay-Shanker and Weir, 1994) is an interesting and important question that naturally arises and has been asked by Kuhlmann et al. (2015). We settle this question and demonstrate that they can be avoided.
CCGs and TAGs are weakly equivalent, and CCGs with ε-entries and CCGs without them generate the same (ε-free) languages.
The tree expressive power of CCGs with restricted rule degrees has already been investigated by Kuhlmann et al. (2019). It has been shown that 0-CCGs accept a proper subset of the regular tree languages (Gécseg and Steinby, 1997), whereas 1-CCGs accept exactly the regular tree languages. It remained open whether there is a k such that k-CCGs and (k + 1)-CCGs have the same expressive power. Our construction establishes that 2-CCGs are as expressive as k-CCGs for arbitrary k ≥ 2. Another consequence of our construction is that first-order categories are sufficient.
2-CCGs with first-order categories have the same expressive power as k-CCGs with k > 2.
8 Conclusion
We presented a translation from spine grammar to CCG. Due to the strong equivalence of spine grammar and TAG (Kepser and Rogers, 2011), we can also construct a strongly equivalent CCG for each TAG. Together with the translation from CCG to sCFTG (Kuhlmann et al., 2019), this proves the strong equivalence of TAG and CCG, which means that both formalisms generate the same derivation trees modulo relabelings. Our construction uses CCG rules of degree at most 2, only first-order categories, lexicon entries of arity at most 3, and no ε-entries in the lexicon. Such CCGs thus have full expressive power. Avoiding ε-entries is particularly interesting because they violate the Principle of Adjacency (Steedman, 2000, p. 54), which is a fundamental linguistic principle underlying CCG and requires that all combining categories correspond to phonologically realized counterparts in the input and are string-adjacent. Their elimination is performed by trimming them from the sCFTG obtained from a CCG with ε-entries and translating the trimmed sCFTG back to a CCG using our construction.
Translating CCG to sCFTG (Kuhlmann et al., 2019) yields sCFTGs whose size is exponential in a CCG-specific constant, which depends on the maximal rule degree and the maximal arity of lexicon entries. The increase can be attributed to variables in CCG rules, which need to be properly instantiated. Our construction increases the grammar size only polynomially, which can be verified for each step. Overall, a k-CCG can be converted to an equivalent 2-CCG without ε-entries in time and space exponential in k (and the maximal length of lexicon entries) and polynomial in the size of the grammar.
Acknowledgments
We would like to thank Mark Steedman and the three anonymous reviewers for their valuable and detailed comments, which greatly helped in improving the comprehensibility of this paper. The work of Lena Katharina Schiffer was funded by the German Research Foundation (DFG) Research Training Group GRK 1763 ‘Quantitative Logics and Automata’.