Tree-adjoining grammar (TAG) and combinatory categorial grammar (CCG) are two well-established mildly context-sensitive grammar formalisms that are known to have the same expressive power on strings (i.e., generate the same class of string languages). It is demonstrated that their expressive power on trees also essentially coincides. In fact, CCG without lexicon entries for the empty string and with only first-order rules of degree at most 2 is sufficient to achieve its full expressive power.

Combinatory categorial grammar (CCG) (Steedman, 2000; Steedman and Baldridge, 2011) is one of several grammar formalisms that were introduced as an extension of context-free grammars. In particular, CCG extends the classical categorial grammar (Bar-Hillel et al., 1960), which has the same expressivity as context-free grammar, by rules that are inspired by combinatory logic (Curry et al., 1958). CCG is a mildly context-sensitive grammar formalism (Joshi, 1985). Mildly context-sensitive grammar formalisms are efficiently parsable (i.e., in polynomial time) and have expressivity beyond the context-free languages. They are able to express a limited amount of cross-serial dependencies and have the constant growth property. Because of these features and its notion of syntactic categories, which is quite intuitive for natural languages, CCG has become widely applied in computational linguistics (Steedman, 2000). Further, it can be enhanced by semantics through the lambda calculus.

CCG is based on a lexicon and a rule system. The lexicon assigns syntactic categories to the symbols of an input string and the rule system describes how neighboring categories can be combined to new categories. Each category has a target, which is similar to the return type of a function, and optionally, a number of arguments. Different from functions, each argument has a directionality that indicates if it is expected on the left or the right side. If repeated combination of categories leads to a (binary) derivation tree that comprises all input symbols and is rooted in an initial category, then the input string is accepted.

When defining CCG, there are many degrees of freedom yielding a number of different variants (Steedman, 2000; Baldridge, 2002; Steedman and Baldridge, 2011; Kuhlmann et al., 2015). This is a consequence of the linguistically motivated need to easily express specific structures that have been identified in a particular theory of syntax for a given natural language. However, we and others (Kuhlmann et al., 2015) are interested in the expressive power of CCGs as generators of formal languages, since this allows us to disentangle the confusion of subtly different formalisms and identify the principal structures expressible by a common core of the formalisms. As linguistic structure calls for a representation that goes beyond strings, we aim for a characterization of expressive power in terms of the generated trees.

The most famous result on the expressive power of CCG is by Vijay-Shanker and Weir (1994), showing that tree-adjoining grammar (TAG), linear-indexed grammar (LIG), head grammar (HG), and CCG generate the same string languages. An equivalent automaton model is the embedded push-down automaton (Vijay-Shanker, 1988). In the definition of CCG used by Vijay-Shanker and Weir (1994), the lexicon allows ε-entries, which assign syntactic categories to the empty string ε. Their rule system restricts rules to specific categories and limits the rule degree. CCGs with unbounded rule degree are Turing-complete (Kuhlmann et al., 2018). Prefix-closed CCGs without target restrictions, in which the rules obey special closure properties, are less powerful. This even holds for multimodal CCGs (Kuhlmann et al., 2010, 2015), which allow many types of directionality indicators (i.e., slashes).

When going beyond the level of string languages, there exist different notions of strong generative power. We consider two formalisms as strongly equivalent if their generated derivation tree languages coincide modulo relabelings. For example, the well-known local and regular tree grammars (Gécseg and Steinby, 1997) are strongly equivalent. On the other hand, Hockenmaier and Young (2008) regard two formalisms as strongly equivalent if they capture the same sets of dependencies. Then there exist specific scrambling cases whose dependencies can be expressed by their CCG, but not by Lexicalized TAG (LTAG). Their CCG are syntactically more expressive than ours and allow type-raising, whereas the strong generative capacity (in our sense) of LTAG is strictly smaller than that of TAG (Kuhlmann and Satta, 2012). The dependencies expressed by CCG without rule restrictions and TAG are shown to be incomparable by Koller and Kuhlmann (2009).

Returning to our notion of strong generative capacity, Kuhlmann et al. (2019) investigated the tree-generative capacity of CCG without ε-entries. The generated trees are always binary. CCG with application and first-degree composition rules generate exactly the regular tree languages (Gécseg and Steinby, 1997). Without the composition rules, only a proper subset can be generated. The languages of CCG rule trees (i.e., trees labeled by applied rules instead of categories) with bounded rule degree can also be generated by simple monadic context-free tree grammar (sCFTG).

For the converse direction, we show that the tree languages generated by sCFTG can also be generated by CCG, which shows strong equivalence. This answers several open questions. Since sCFTG and TAG are strongly equivalent (Kepser and Rogers, 2011), our result also shows strong equivalence of CCG and TAG. In contrast to the construction of Vijay-Shanker and Weir (1994), which relies heavily on ε-entries, our construction avoids them and shows that they do not increase the expressive power of CCG. Additionally, we only use rules up to degree 2 and first-order categories (i.e., arguments are atomic), which shows that larger rule degree or higher-order categories do not increase the expressive power.

Our construction proceeds roughly as follows. We begin with a spine grammar, which is a variant of sCFTG that is also strongly equivalent to TAG. We encode its spines using a context-free grammar, which in turn can be represented by a special variant of push-down automata. Finally, the runs of the push-down automaton are simulated by a CCG such that the stack operations of the automaton are realized by adding and removing arguments of the categories.

The nonnegative integers are ℕ. For every k ∈ ℕ, we let $[k] = \{i \in \mathbb{N} \mid 1 \leq i \leq k\}$. The set Σ* contains all strings over the finite set Σ including the empty string ε. We let $\Sigma^+ = \Sigma^* \setminus \{\varepsilon\}$. The length of w ∈ Σ* is $|w|$, and concatenation is written as juxtaposition. The prefixes $\mathrm{Pref}(w)$ of a string w ∈ Σ* are $\{u \in \Sigma^* \mid \exists v \in \Sigma^* \colon w = uv\}$. A string language is a subset L ⊆ Σ*. Given a relation ${\Rightarrow} \subseteq S \times S$, we let ⇒* be the reflexive, transitive closure of ⇒.
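As a concrete illustration of these conventions, here is a small sketch; the encodings and function names are ours, not the paper's:

```python
# Minimal illustrations of the preliminaries; names are our own.
def bracket(k):
    """The set [k] = {1, ..., k}."""
    return set(range(1, k + 1))

def prefixes(w):
    """Pref(w): all prefixes of w, including the empty string and w itself."""
    return {w[:i] for i in range(len(w) + 1)}

def closure(pairs, universe):
    """Reflexive, transitive closure of a relation on a finite set,
    represented as a set of pairs."""
    reach = {(x, x) for x in universe} | set(pairs)
    while True:
        new = {(x, z) for (x, y) in reach for (y2, z) in reach if y == y2}
        if new <= reach:
            return reach
        reach |= new
```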

### 2.1 Tree Languages

In this paper, we only deal with binary trees since the derivation trees of CCGs are binary. Thus, we build trees over ranked sets Σ = Σ0 ∪ Σ1 ∪ Σ2. If Σ is an alphabet, then it is a ranked alphabet. For every k ∈ {0, 1, 2}, we say that symbol a ∈ Σk has rank k. We write $T_{\Sigma_2, \Sigma_1}(\Sigma_0)$ for the set of all trees over Σ, which is the smallest set T such that $c(t_1, \dots, t_k) \in T$ for all k ∈ {0, 1, 2}, c ∈ Σk, and $t_1, \dots, t_k \in T$. As usual, we write just a for leaves a() with a ∈ Σ0. A tree language is a subset $T \subseteq T_{\Sigma_2, \emptyset}(\Sigma_0)$. Let $T = T_{\Sigma_2, \Sigma_1}(\Sigma_0)$. The map $\mathrm{pos} \colon T \to \mathcal{P}_+([2]^*)$ assigns Gorn tree addresses (Gorn, 1965) to a tree, where $\mathcal{P}_+(S)$ is the set of all nonempty subsets of S. Let
$\mathrm{pos}(c(t_1, \dots, t_k)) = \{\varepsilon\} \cup \bigcup_{i \in [k]} \{iw \mid w \in \mathrm{pos}(t_i)\}$
for all k ∈ {0, 1, 2}, c ∈ Σk, and $t_1, \dots, t_k \in T$. The set of all leaf positions of t is defined as $\mathrm{leaves}(t) = \{w \in \mathrm{pos}(t) \mid w1 \notin \mathrm{pos}(t)\}$. Given a tree t ∈ T and a position w ∈ pos(t), we write $t|_w$ and t(w) to denote the subtree rooted in w and the symbol at w, respectively. Additionally, we let $t[t']_w$ be the tree obtained when replacing the subtree appearing in t at position w by t′ ∈ T. Finally, let $\mathrm{yield} \colon T \to \Sigma_0^+$ be defined by $\mathrm{yield}(a) = a$ for all a ∈ Σ0 and $\mathrm{yield}(c(t_1, \dots, t_k)) = \mathrm{yield}(t_1) \cdots \mathrm{yield}(t_k)$ for all k ∈ [2], c ∈ Σk, and $t_1, \dots, t_k \in T$.
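These tree operations can be illustrated with a minimal sketch; the tuple encoding and names are ours:

```python
# Binary trees as nested tuples (symbol, children); pos, subtree access,
# and yield follow the definitions above (the encoding is ours).
def pos(t):
    """Gorn addresses of t as strings over {'1', '2'}."""
    sym, children = t
    return {""} | {str(i) + w for i, c in enumerate(children, 1) for w in pos(c)}

def subtree(t, w):
    """The subtree t|_w rooted at Gorn address w."""
    return t if w == "" else subtree(t[1][int(w[0]) - 1], w[1:])

def tree_yield(t):
    """Concatenation of the leaf symbols from left to right."""
    sym, children = t
    return sym if not children else "".join(tree_yield(c) for c in children)
```

For the tree c(a, b), encoded as `("c", (("a", ()), ("b", ())))`, `pos` yields `{"", "1", "2"}` and `tree_yield` yields `"ab"`.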

The special leaf symbol □ is reserved and represents a hole in a tree. The set $C_{\Sigma_2, \Sigma_1}(\Sigma_0)$ of contexts contains all trees of $T_{\Sigma_2, \Sigma_1}(\Sigma_0 \cup \{\Box\})$ in which □ occurs exactly once. We write $\mathrm{pos}_\Box(C)$ to denote the unique position of □ in the context $C \in C_{\Sigma_2, \Sigma_1}(\Sigma_0)$. Moreover, given t ∈ T we simply write C[t] instead of $C[t]_{\mathrm{pos}_\Box(C)}$.

A tuple (ρ0, ρ1, ρ2) is called a relabeling if $\rho_k \colon \Sigma_k \to \Delta_k$ for all k ∈ {0, 1, 2} and a ranked set Δ. It induces the map $\rho \colon T \to T_{\Delta_2, \Delta_1}(\Delta_0)$ given by $\rho(c(t_1, \dots, t_k)) = \rho_k(c)(\rho(t_1), \dots, \rho(t_k))$ for all k ∈ {0, 1, 2}, c ∈ Σk, and $t_1, \dots, t_k \in T$.

### 2.2 Combinatory Categorial Grammar

In the following, we give a short introduction to CCG. Given an alphabet A of atoms or atomic categories and a set of slashes D = {/, ∖} indicating directionality, the set of categories is defined as $C(A) = T_{D, \emptyset}(A)$. We usually write the categories in infix notation and the slashes are left-associative by convention, so each category takes the form $c = a\,|_1 c_1 \cdots |_k c_k$, where a ∈ A, $|_i \in D$, and $c_i \in C(A)$ for all i ∈ {1, …, k}. The atom a is called the target of c and written as tar(c). The slash-category pairs $|_i c_i$ are called arguments and their number k is called the arity of c and denoted by ar(c). In addition, we write arg(c, i) to get the i-th argument $|_i c_i$ of c. In the sense of trees, the sequence of arguments is a context $\Box\,|_1 c_1 \cdots |_k c_k$. The set of argument contexts is denoted by $A(A) \subseteq C_{D, \emptyset}(A)$. We distinguish between two types of categories. In first-order categories, all arguments are atomic, whereas in higher-order categories, the arguments can have arguments themselves.
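A hedged sketch of this category anatomy, under a tuple encoding of our own choosing:

```python
# Our tuple encoding of a category a|1 c1 ... |k ck: (target, arguments),
# where arguments is a tuple of (slash, category) pairs; helper names
# mirror the text (tar, ar, arg).
def tar(c):
    return c[0]

def ar(c):
    return len(c[1])

def arg(c, i):
    return c[1][i - 1]          # the i-th argument |i ci

def first_order(c):
    # a category is first-order if every argument is an atom
    # (a plain string in this encoding)
    return all(isinstance(ci, str) for (_, ci) in c[1])
```

For instance, the category c∖a/c becomes `("c", (("\\", "a"), ("/", "c")))`, with target `"c"` and arity 2.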

Next, we describe how two neighboring categories can be combined. Intuitively, the direction of the slash determines on which side a category matching the argument is expected. Hence there are two types of rules. Contrary to the usual conventions for inference systems, we put the inputs (premises) below and the output (conclusion) above to make the shape of the proof tree apparent. A rule of degree k with k ∈ ℕ has one of the following forms:
$\dfrac{ax\,|_1 c_1 \cdots |_k c_k}{ax/c \quad\; c\,|_1 c_1 \cdots |_k c_k} \;\text{(forward rule)} \qquad \dfrac{ax\,|_1 c_1 \cdots |_k c_k}{c\,|_1 c_1 \cdots |_k c_k \quad\; ax \backslash c} \;\text{(backward rule)}$
where a ∈ A, $c \in C(A) \cup \{y\}$, $|_i \in D$, and $c_i \in C(A) \cup \{y_i\}$ for all i ∈ [k]. Here, $y, y_1, \dots, y_k$ are category variables that can match any category in $C(A)$ and x is an argument context variable that can match any argument context in $A(A)$. The category taking the argument ($ax\,|\,c$ with | ∈ D) is called the primary category, the one providing it ($c\,|_1 c_1 \cdots |_k c_k$) is called the secondary category, and they are combined to an output category ($ax\,|_1 c_1 \cdots |_k c_k$). Given a rule r, we write sec(r) to refer to the secondary category. Rules of degree 0 will be referred to as application rules, while rules of higher degree are composition rules. We write $R(A)$ for the set of all rules over A. A rule system is a pair Π = (A, R), where A is an alphabet and $R \subseteq R(A)$ is a finite set of rules over A. Given a rule r ∈ R, we obtain a ground instance of it by replacing the variables {y, y_1, …} by concrete categories and the variable x by a concrete argument context. The ground instances of Π induce a relation ${\to_\Pi} \subseteq C(A)^2 \times C(A)$ and we write $\frac{c''}{c \; c'} \in \Pi$ instead of $(c, c') \to_\Pi c''$. The relation $\to_\Pi$ extends to a relation ${\Rightarrow_\Pi} \subseteq (C(A)^*)^2$ on sequences of categories. It is given by
$\Rightarrow_\Pi \;=\; \bigcup_{\varphi, \psi \in C(A)^*} \bigl\{(\varphi\, c\, c'\, \psi, \; \varphi\, c''\, \psi) \;\big|\; (c, c') \to_\Pi c''\bigr\}$
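For first-order categories, ground instances of the two rule shapes can be sketched as follows; the (target, arguments) encoding is our own, and the degree equals the number of secondary arguments:

```python
# Hedged sketch of ground rule instances over first-order categories,
# encoded by us as (target, arguments) with arguments a tuple of
# (slash, atom) pairs. The primary consumes the secondary's target and
# inherits its arguments.
def forward(primary, secondary):
    a, pargs = primary
    if not pargs or pargs[-1] != ("/", secondary[0]):
        return None                     # slash or target mismatch
    return (a, pargs[:-1] + secondary[1])

def backward(primary, secondary):
    a, pargs = primary
    if not pargs or pargs[-1] != ("\\", secondary[0]):
        return None
    return (a, pargs[:-1] + secondary[1])
```

With a secondary without arguments this models application (degree 0); with a secondary carrying one argument it models degree-1 composition.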
A combinatory categorial grammar (CCG) is a tuple $G = (\Sigma, A, R, I, \mathcal{L})$ that consists of an alphabet Σ of input symbols, a rule system (A, R), a set I ⊆ A of initial categories, and a finite relation $\mathcal{L} \subseteq \Sigma \times C(A)$ called the lexicon. It is called a k-CCG if each rule r ∈ R has degree at most k, where k ∈ ℕ.
The CCG $G$ generates the category sequences $C_G \subseteq C(A)^*$ and the string language $L(G) \subseteq \Sigma^*$ given by
$C_G = \bigcup_{a_0 \in I} \bigl\{\varphi \in C(A)^* \;\big|\; \varphi \Rightarrow_{(A,R)}^* a_0\bigr\}$
and $L(G) = \mathcal{L}^{-1}(C_G)$; that is, the string language L(G) contains all strings that can be relabeled via the lexicon to a category sequence in $C_G$. A tree $t \in T_{C(A), \emptyset}(\mathcal{L}(\Sigma))$ is called a derivation tree of $G$ if $(t(w \cdot 1), t(w \cdot 2)) \to_{(A,R)} t(w)$ for every w ∈ pos(t) ∖ leaves(t). We denote the set of all derivation trees of $G$ by $D(G)$.
A category relabeling $\rho \colon C(A) \to \Delta$ is a relabeling such that ρ(c) = ρ(c′) for all categories $c, c' \in C(A)$ with tar(c) = tar(c′) and $\mathrm{arg}(c, \mathrm{ar}(c)) = \mathrm{arg}(c', \mathrm{ar}(c'))$. The relabeled derivation trees $T_\rho(G) \subseteq T_{\Delta_2, \emptyset}(\Delta_0)$ are given by
$T_\rho(G) = \bigl\{\rho(t) \;\big|\; t \in D(G), \; t(\varepsilon) \in I\bigr\}$
A tree language $T \subseteq T_{\Delta_2, \emptyset}(\Delta_0)$ is generatable by $G$ if there is a category relabeling $\rho' \colon C(A) \to \Delta$ such that $T = T_{\rho'}(G)$.
Example 1.
Let $G=(Σ,A,R(A,2),{⊥},L)$ with Σ = {α,β,γ,δ} and A = {⊥,a,b,c,d,e} be a CCG with the lexicon ℒ given below, where $R(A,2)$ is the set of all rules over A up to degree 2. Thus, it is a 2-CCG.
$\mathcal{L}(\alpha) = \{a, b\} \qquad \mathcal{L}(\beta) = \{c{\backslash}b, \; c{\backslash}b{\backslash}e, \; e, \; e/e\} \qquad \mathcal{L}(\gamma) = \{d/c, \; c{\backslash}a/c\} \qquad \mathcal{L}(\delta) = \{\bot/d\}$
$\dfrac{\bot x \backslash a / c}{\bot x / c \quad c \backslash a / c}$ is a forward rule of degree 2 in $R(A, 2)$, where x is an argument context and can thus be replaced by an arbitrary sequence of arguments. Utilizing $x = \Box \backslash a$ yields the ground instance $\dfrac{\bot \backslash a \backslash a / c}{\bot \backslash a / c \quad c \backslash a / c}$, which has primary category $c_1 = \bot \backslash a / c$ and secondary category $c_2 = c \backslash a / c$. The latter has target $\mathrm{tar}(c_2) = c$ and the two arguments ∖a and /c, so its arity is $\mathrm{ar}(c_2) = 2$.

A derivation tree of $G$ is depicted in Figure 1. We start at the bottom with categories taken from the lexicon in accordance with the input symbols. Then neighboring categories are combined until we arrive at the root with initial category ⊥, so the input word is accepted.

Figure 1: CCG derivation tree (see Example 1).

We start by introducing a Moore variant of push-down automata (Autebert et al., 1997) that is geared towards our needs and still accepts the context-free languages (of strings of length ≥ 2). It will be similar to the push-down Moore machines of Decker et al. (2013). Instead of processing input symbols as part of transitions (as in Mealy machines), Moore machines output a unique input symbol in each state (Fleischner, 1977). For every set Γ, we let Γ≤1 = {ε}∪ Γ and Γ≥2 = {w ∈ Γ*∣2 ≤|w|} be the sets of strings over Γ of length at most 1 and at least 2, respectively.

Definition 2.

A Moore push-down automaton (MPDA) $A=(Q,Σ,Γ,δ,τ,I,F)$ consists of (i) finite sets Q, Σ, and Γ of states, input symbols, and stack symbols, respectively, (ii) a set δ ⊆ (Q × Γ≤1 × Γ≤1 × Q) ∖ (Q × Γ × Γ × Q) of transitions, (iii) an output function τ : Q → Σ, and (iv) sets I,FQ of initial and final states, respectively.

Due to the definition of δ, in a single step we can either push or pop a single stack symbol or ignore the stack. Note that we explicitly exclude the case where a symbol is popped and another symbol is pushed at the same time. In the following, let $A = (Q, \Sigma, \Gamma, \delta, \tau, I, F)$ be an MPDA. On the set $\mathrm{Conf}_A = Q \times \Gamma^*$ of configurations of $A$ the move relation ${⊩_A} \subseteq \mathrm{Conf}_A^2$ is
$⊩_A \;=\; \bigcup_{(q, \gamma, \gamma', q') \in \delta} \bigl\{(\langle q, \gamma\alpha\rangle, \langle q', \gamma'\alpha\rangle) \;\big|\; \alpha \in \Gamma^*\bigr\}$
and a configuration $\langle q, \alpha\rangle \in \mathrm{Conf}_A$ is initial (respectively, final) if q ∈ I and α ∈ Γ (respectively, q ∈ F and α = ε). An accepting run is a sequence $\xi_0, \dots, \xi_n \in \mathrm{Conf}_A$ of configurations that are successively related by moves (i.e., $\xi_{i-1} ⊩_A \xi_i$ for all i ∈ [n]), starts with an initial configuration ξ0, and finishes in a final configuration ξn. In other words, we can start in an initial state with an arbitrary symbol on the stack and finish in a final state with the empty stack, and for each intermediate step there has to exist a transition. The language accepted by $A$ contains exactly those strings w ∈ Σ*, for which there exists an accepting run $\langle q_0, \alpha_0\rangle, \dots, \langle q_n, \alpha_n\rangle$ such that $w = \tau(q_0) \cdots \tau(q_n)$. Thus, we accept the strings that are output symbol-by-symbol by the states attained during an accepting run. As usual, two MPDA are equivalent if they accept the same language. Since no initial configuration is final, each accepting run has length at least 2, so we can only accept strings of length at least 2. While we could adjust the model to remove this restriction, the presented version serves our later purposes best.
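The acceptance condition can be made concrete with a small run-checker; the encoding of transitions and configurations below is our own, not the paper's:

```python
from collections import deque

def accepts(word, delta, tau, I, F, Gamma):
    """Decide whether an MPDA accepts `word` (a sketch: delta is a set of
    tuples (q, pop, push, q2) with '' meaning "no symbol"; tau maps each
    state to the symbol it outputs)."""
    if len(word) < 2:
        return False  # accepting runs consist of at least two configurations
    # a search node is (state, stack as tuple with the top first, symbols output)
    frontier = deque((q, (g,), 1) for q in I for g in Gamma if tau[q] == word[0])
    seen = set(frontier)
    while frontier:
        q, stack, i = frontier.popleft()
        if i == len(word):
            if q in F and not stack:
                return True          # final state with empty stack
            continue
        for (p, pop, push, p2) in delta:
            if p != q or tau[p2] != word[i]:
                continue
            if pop:                  # pop a single symbol
                if not stack or stack[0] != pop:
                    continue
                nstack = stack[1:]
            else:
                nstack = stack
            if push:                 # push a single symbol
                nstack = (push,) + nstack
            cfg = (p2, nstack, i + 1)
            if cfg not in seen:
                seen.add(cfg)
                frontier.append(cfg)
    return False

# A toy instance accepting {a^n b^n | n >= 1}: state "qa" outputs 'a' and
# pushes, state "qb" outputs 'b' and pops.
delta = {("qa", "", "X", "qa"), ("qa", "X", "", "qb"), ("qb", "X", "", "qb")}
tau = {"qa": "a", "qb": "b"}
```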
Theorem 3.

MPDA accept the context-free languages of strings of length at least 2.

The MPDA $A$ is pop-normalized if there exists a map pop: Γ → Q such that q′ = pop(γ) for every transition (q,γ,ε,q′) ∈ δ. In other words, for each stack symbol γ ∈ Γ there is a unique state pop(γ) that the MPDA enters whenever γ is popped from the stack.

Later on, we will simulate the runs of an MPDA in a CCG such that subsequent configurations are represented by subsequent primary categories. Popping transitions are modeled by removing the last argument of a category. Thus, the target state has to be stored in the previous argument. This argument is added when the according pushing transition is simulated, so at that point we already have to be aware in which state the MPDA will end up after popping the symbol again. This will be explained in more detail in Section 7.

We can easily establish this property by storing a state in each stack symbol. Each pushing transition is replaced by one variant for each state (i.e., we guess a state when pushing), but when a symbol is popped, this is only allowed if the state stored in it coincides with the target state.
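A sketch of this guess-and-verify construction, using the same transition encoding as in the previous sketch (ours, not the paper's):

```python
def pop_normalize(Q, Gamma, delta):
    """Sketch of the construction behind Lemma 4 (our encoding): every
    pushed symbol is annotated with a guessed state; popping (g, s) is only
    allowed when the transition's target state is exactly s."""
    ndelta = set()
    for (q, pop, push, q2) in delta:
        if push:      # guess the state entered when this symbol is popped later
            ndelta |= {(q, "", (push, s), q2) for s in Q}
        elif pop:     # popping succeeds only if the guess matches the target
            ndelta.add((q, (pop, q2), "", q2))
        else:
            ndelta.add((q, "", "", q2))
    nGamma = {(g, s) for g in Gamma for s in Q}
    popmap = {gs: gs[1] for gs in nGamma}   # pop(g, s) = s
    return nGamma, ndelta, popmap
```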

Lemma 4.

For every MPDA we can construct an equivalent pop-normalized MPDA.

The next statement shows that we can provide a form of look-ahead on the output. In each new symbol we store the current as well as the next output symbol. Standard techniques can be used to prove the statement. We will briefly sketch why this look-ahead is necessary. Before constructing the CCG, the MPDA will be used to model a spine grammar. The next output symbol of the MPDA corresponds to the label of the parent node along a so-called spine of a tree generated by the spine grammar. From this parent node we can determine the possible labels of its other child. This information will be used in the CCG to control which secondary categories are allowed as neighboring combination partners.

Lemma 5.
For every context-free language L ⊆ Σ* and ⊲ ∉ Σ, the language Next(L) is context-free, where
$\mathrm{Next}(L) = \bigl\{\langle\sigma_2, \sigma_1\rangle \langle\sigma_3, \sigma_2\rangle \cdots \langle\sigma_n, \sigma_{n-1}\rangle \langle⊲, \sigma_n\rangle \;\big|\; n \in \mathbb{N}, \; \sigma_1 \cdots \sigma_n \in L\bigr\}$
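The annotation of Lemma 5 is easy to sketch on a single string (the function name and list encoding are ours):

```python
# Sketch of the Next(.) annotation: each symbol is paired with its
# successor, and the last symbol with the end marker.
def next_annotated(w, end="⊲"):
    return [((w[i + 1] if i + 1 < len(w) else end), w[i]) for i in range(len(w))]
```

For example, `next_annotated("abc")` gives the pairs ⟨b, a⟩⟨c, b⟩⟨⊲, c⟩.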
Corollary 6.

For every context-free language L ⊆ Σ^{≥2} there exists a pop-normalized MPDA $A$ such that $L(A) = \mathrm{Next}(L)$.

Now we move on to representations of tree languages. We first recall context-free tree grammars (Rounds, 1969), but only the monadic simple variant (Kepser and Rogers, 2011).

Definition 7.

A simple monadic context-free tree grammar (sCFTG) is a tuple $G = (N, \Sigma, S, P)$ consisting of (i) disjoint ranked alphabets N and Σ of nonterminal and terminal symbols with N = N1 ∪ N0 and Σ1 = ∅, (ii) a nullary start nonterminal S ∈ N0, and (iii) a finite set P ⊆ P0 ∪ P1 of productions, where $P_0 = N_0 \times T_{\Sigma_2, N_1}(N_0 \cup \Sigma_0)$ and $P_1 = N_1 \times C_{\Sigma_2, N_1}(N_0 \cup \Sigma_0)$.

In the following let $G = (N, \Sigma, S, P)$ be an sCFTG. We write (n, r) ∈ P simply as n → r. Given $t, u \in T_{\Sigma_2, N_1}(\Sigma_0 \cup N_0)$ we let $t \Rightarrow_G u$ if there exist (n → r) ∈ P and a position w ∈ pos(t) such that (i) $t|_w = n$ and $u = t[r]_w$ with n ∈ N0, or (ii) $t|_w = n(t')$ and $u = t[r[t']]_w$ with n ∈ N1 and $t' \in T_{\Sigma_2, N_1}(\Sigma_0 \cup N_0)$. The tree language $T(G)$ generated by $G$ is
$T(G) = \bigl\{t \in T_{\Sigma_2, \emptyset}(\Sigma_0) \;\big|\; S \Rightarrow_G^* t\bigr\}$
The sCFTG $G'$ is strongly equivalent to $G$ if $T(G) = T(G')$, and it is weakly equivalent to $G$ if $\mathrm{yield}(T(G)) = \mathrm{yield}(T(G'))$.

Spine grammars (Fujiyoshi and Kasai, 2000) are a restriction of simple monadic context-free tree grammars that remains equally expressive modulo relabelings by Lemma 5.4 of Fujiyoshi and Kasai (2000). Let us clarify this result. Clearly, each spine grammar is itself an sCFTG, and for each sCFTG $G$ there exists a spine grammar $G'$ and a relabeling ρ such that $T(G) = \{\rho(t) \mid t \in T(G')\}$. Although sCFTGs are more established, we elect to utilize spine grammars because of their essential notion of spines.

Definition 8.

The sCFTG $G$ is a spine grammar if there exists a map $d \colon \Sigma_2 \to \{1, 2\}$ such that $wi \in \mathrm{Pref}(\mathrm{pos}_\Box(C))$ with i = d(C(w)) for every production (n → C) ∈ P with n ∈ N1 and every $w \in \mathrm{Pref}(\mathrm{pos}_\Box(C))$ with C(w) ∈ Σ2.

Henceforth let $G$ be a spine grammar with map $d \colon \Sigma_2 \to \{1, 2\}$. Consider a production (n → C) ∈ P with n ∈ N1. The spine of C is simply the path from the root of C to the unique occurrence $\mathrm{pos}_\Box(C)$ of □. The special feature of a spine grammar is that the symbols along the spine indicate exactly in which direction the spine continues. Since only the binary terminal symbols offer branching, this amounts to the existence of a map d that tells us for each binary terminal symbol σ ∈ Σ2 whether the spine continues to the left, in which case d(σ) = 1, or to the right, in which case d(σ) = 2. This map d, called the spine direction, applies to all instances of σ in all productions with spines. In the original definition of spine grammars (Fujiyoshi and Kasai, 2000, Definition 3.2), only nonterminal symbols have a spine direction. By creating copies of binary terminal symbols we can show that both variants are equivalent modulo relabelings.

Definition 9.

Spine grammar $G$ is in normal form if each (n → r) ∈ P is of the form (i) start: r = b(α) or r = α for some b ∈ N1 and α ∈ Σ0, (ii) chain: $r = b_1(b_2(\Box))$ for some b1, b2 ∈ N1, or (iii) terminal: $r = \sigma(\Box, a)$ or $r = \sigma(a, \Box)$ for some σ ∈ Σ2 and a ∈ N0 ∖ {S}.

In spine grammars in normal form, the start nonterminal is isolated and cannot occur on the right-hand sides. The three production types of the normal form are illustrated in Figure 2. Using a single start production followed by a number of chain and terminal productions, a nullary nonterminal n can be rewritten to a tree t that consists of a spine of terminals, where each non-spinal child is a nullary nonterminal. Formally, for every nullary nonterminal n ∈ N0 let
$I_G(n) = \bigl\{t \in T_{\Sigma_2, \emptyset}(\Sigma_0 \cup N_0) \;\big|\; n \mathrel{({\Rightarrow_G}; {\Rightarrow_{G'}^*})} t\bigr\}$
where $G'$ is the spine grammar $G$ without start productions; that is, $G' = (N, \Sigma, S, P')$ with productions $P' = \{(n \to r) \in P \mid n \in N_1\}$. So we perform a single derivation step using the productions of $G$ followed by any number of derivation steps using only productions of $G'$. The elements of $I_G(n)$ are called spinal trees for n and their spine generator is n. By a suitable renaming of nonterminals we can always achieve that the spine generator does not occur in any of its spinal trees. The spine grammar $G$ is normalized if it is in normal form and $I_G(n) \subseteq T_{\Sigma_2, \emptyset}(\Sigma_0 \cup (N_0 \setminus \{n\}))$ for every nullary nonterminal n ∈ N0.
Figure 2: Types of productions of spine grammars in normal form (see Definition 9).

The following result is a variant of Theorem 1 of Fujiyoshi and Kasai (2000).

Theorem 10.

For every spine grammar there is a strongly equivalent normalized spine grammar.

Example 11.
We define the spine grammar $G = (N, \Sigma, \{s\}, P)$ with N1 = {t, a, b, c, b′, e}, $N_0 = \{s, \bar a, \bar b, \bar c, \bar e\}$, Σ2 = {α2, β2, γ2, η2}, Σ0 = {α, β, γ, δ}, and P as shown below.
$s \to t(\delta) \qquad t \to a(b'(\Box)) \qquad b' \to b(c(\Box)) \qquad b \to a(b'(\Box)) \qquad e \to e(e(\Box))$
$a \to \alpha_2(\bar a, \Box) \qquad b \to \beta_2(\Box, \bar b) \qquad c \to \gamma_2(\Box, \bar c) \qquad e \to \eta_2(\bar e, \Box) \qquad \bar b \to e(\beta)$
$\bar a \to \alpha \qquad \bar b \to \beta \qquad \bar c \to \gamma \qquad \bar e \to \beta$
The tree in Figure 3a, in which the spines are marked by thick edges, is generated by $G$. The spinal tree corresponding to the main spine of the depicted tree is shown in Figure 3b. The yield of $T(G)$ is $\{\alpha^n \delta \gamma^n \beta^m \mid n, m \geq 1\}$.
Figure 3: Tree generated by spine grammar $G$, a spinal tree in $I_G(s)$ (see Example 11), and a tree in $F(S(G))_S$ reassembled from spines (see Example 16).

Figure 4 shows the elementary trees of an example TAG. Only tree 1 is initial, and foot nodes are marked by a superscript asterisk ⋅* on the label. Whenever adjunction is forbidden (i.e., empty set as selective adjunction constraint and non-obligatory adjunction), we omit the constraints altogether. Otherwise, the constraints are put next to the label. For example, {2,3}+ indicates that tree 2 or 3 must (+ = obligatory) be adjoined.

Figure 4: Elementary trees of an example TAG.

We briefly sketch the transformation from TAG to sCFTG by Kepser and Rogers (2011). TAG is a notational variant of footed simple CFTG, in which all variables in right-hand sides of productions appear in order directly below a designated foot node. To obtain an sCFTG, the footed simple CFTG is first converted into a spine grammar, where the spine is the path from the root to the foot node, and then brought into normal form using the construction of Fujiyoshi and Kasai (2000). The spine grammar of Example 11 is strongly equivalent to the TAG shown in Figure 4.

We proceed with the construction starting from the normalized spine grammar $G$. First, we will construct a context-free grammar (CFG) that captures all information of $G$. It represents the spinal trees (from bottom to top) as strings and enriches the symbols with the spine generator (initialized by start productions and preserved by chain productions) and a non-spinal child (given by terminal productions). The order of these annotations depends on the spine direction of the symbol. The leftmost symbol of the generated strings has only a spine generator annotated since the bottom of the spine has no children. To simplify the notation, we write $n^g$ for (n, g) ∈ N², $\alpha^n$ for (α, n) ∈ Σ0 × N, and $\sigma^{n_1 n_2}$ for (σ, n1, n2) ∈ Σ2 × N².

Definition 12.
Let $G$ be normalized and ⊤ ∉ N. The spines $S(G) = L(G')$ of $G$ are the strings generated by the CFG $G' = (\{\top\} \cup N^2, \Sigma', \top, P')$ with Σ′ = (Σ0 × N) ∪ (Σ2 × N²) and productions P′ = P0 ∪ P1 ∪ P2 given by
$P_0 = \{\top \to \alpha^n \mid (n \to \alpha) \in P\} \cup \{\top \to \alpha^n\, b^n \mid (n \to b(\alpha)) \in P\}$
$P_1 = \bigcup_{g \in N} \{n^g \to b'^g\, b^g \mid (n \to b(b'(\Box))) \in P\}$
$P_2 = \bigcup_{g \in N} \{n^g \to \sigma^{g n'} \mid (n \to \sigma(\Box, n')) \in P\} \cup \{n^g \to \sigma^{n' g} \mid (n \to \sigma(n', \Box)) \in P\}$
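Definition 12 can be sketched as a production generator; the encoding of spine-grammar productions below is entirely our own:

```python
# Hedged sketch of Definition 12. Spine-grammar productions are encoded
# by us as
#   ("start", n, alpha)        for n -> alpha,
#   ("start", n, (b, alpha))   for n -> b(alpha),
#   ("chain", n, b, b2)        for n -> b(b2(HOLE)),
#   ("term", n, sigma, d, n2)  for n -> sigma(HOLE, n2) if d == 1
#                              and n -> sigma(n2, HOLE) if d == 2.
# CFG nonterminals n^g become pairs (n, g); annotated terminals become
# pairs (alpha, n) or triples (sigma, n1, n2).
def spine_cfg(prods, N):
    P = set()
    for p in prods:
        if p[0] == "start":
            _, n, r = p
            if isinstance(r, tuple):              # n -> b(alpha)
                b, alpha = r
                P.add(("TOP", ((alpha, n), (b, n))))
            else:                                 # n -> alpha
                P.add(("TOP", ((r, n),)))
        elif p[0] == "chain":                     # n^g -> b2^g b^g
            _, n, b, b2 = p
            P |= {((n, g), ((b2, g), (b, g))) for g in N}
        else:                                     # terminal productions
            _, n, sigma, d, n2 = p
            if d == 1:                            # spine left: n^g -> sigma^{g n2}
                P |= {((n, g), ((sigma, g, n2),)) for g in N}
            else:                                 # spine right: n^g -> sigma^{n2 g}
                P |= {((n, g), ((sigma, n2, g),)) for g in N}
    return P
```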
Example 13.
We list some corresponding productions of the spine grammar $G$ (left) of Example 11 and the CFG $G'$ (right) for its spines $S(G)$.
$\bar a \to \alpha :\; \top \to \alpha^{\bar a} \qquad\qquad s \to t(\delta) :\; \top \to \delta^s\, t^s$
$t \to a(b'(\Box)) :\; t^s \to b'^s\, a^s, \;\; t^{\bar b} \to b'^{\bar b}\, a^{\bar b}, \;\dots$
$a \to \alpha_2(\bar a, \Box) :\; a^s \to \alpha_2^{\bar a s}, \;\; a^{\bar b} \to \alpha_2^{\bar a \bar b}, \;\dots$
Note that for each start production we obtain a single production since the nonterminal on the left side becomes the spine generator. On the other hand, for each chain or terminal production we have to combine them with all nonterminals, as we do not know the spine generator of the nonterminal on the left side of the original production. When a string is derived, the spine generators are pulled through originating from start productions and are consistent throughout the string. The language generated by $G′$ is
$S(G) = \bigl\{\delta^s (\gamma_2^{s \bar c})^n\, \beta_2^{s \bar b}\, (\alpha_2^{\bar a s})^n \;\big|\; n \geq 1\bigr\} \cup \bigl\{\beta^{\bar b} (\eta_2^{\bar e \bar b})^m \;\big|\; m \geq 0\bigr\} \cup \bigl\{\alpha^{\bar a}, \beta^{\bar e}, \gamma^{\bar c}\bigr\}$

Note that each string generated by the CFG belongs to $(\Sigma_0 \times N)(\Sigma_2 \times N^2)^*$. Next we define how to reassemble those spines to form trees again, which then relabel to the original trees generated by $G$. The operation given in the following definition describes how a string generated by the CFG can be transformed into a tree by attaching subtrees in the non-spinal direction of each symbol, whereby the non-spinal child annotation of the symbol and the spinal annotation of the root of the attached tree have to match.

Definition 14.
Let $T \subseteq T_{\Sigma_2 \times N, \emptyset}(\Sigma_0 \times N)$ and w ∈ A with $A = (\Sigma_0 \times N)(\Sigma_2 \times N^2)^*$. The generator $\mathrm{gen} \colon (\Sigma_0 \times N) \cup (\Sigma_2 \times N^2) \to N$ yields the nonterminal in spine direction and is given by
$\mathrm{gen}(a) = \begin{cases} n & \text{if } a = \alpha^n \in \Sigma_0 \times N \\ n_{d(\sigma)} & \text{if } a = \sigma^{n_1 n_2} \in \Sigma_2 \times N^2 \end{cases}$
For n ∈ N, let $T_n = \{t \in T \mid \mathrm{gen}(t(\varepsilon)) = n\}$ be those trees of T whose root label has n annotated in spinal direction. We define the tree language $\mathrm{att}_T(w) \subseteq T_{\Sigma_2 \times N, \emptyset}(\Sigma_0 \times N)$ recursively by $\mathrm{att}_T(\alpha^n) = \{\alpha^n\}$ for all $\alpha^n \in \Sigma_0 \times N$, and
$\mathrm{att}_T(w\, \sigma^{n_1 n_2}) = \bigl\{\sigma^{n_1 n_2}(t_1, t_2) \;\big|\; t_{d(\sigma)} \in \mathrm{att}_T(w), \; t_{3 - d(\sigma)} \in T_{n_{3 - d(\sigma)}}\bigr\}$
for all w ∈ A and $\sigma^{n_1 n_2} \in \Sigma_2 \times N^2$.
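A recursive sketch of $\mathrm{att}_T$ under a tuple encoding of our own:

```python
def make_att(d):
    """att_T, sketched: trees are (label, children); a leaf label is a pair
    (alpha, n), a binary label a triple (sigma, n1, n2); d gives the spine
    direction of each binary symbol (1 = left child, 2 = right child)."""
    def gen(label):
        if len(label) == 2:                 # leaf label (alpha, n)
            return label[1]
        name, n1, n2 = label
        return (n1, n2)[d[name] - 1]        # annotation in spine direction
    def att(T, w):
        head, *rest = w                     # w lists the spine bottom-up
        trees = {(head, ())}
        for sym in rest:
            name, n1, n2 = sym
            i = d[name]
            n_other = (n1, n2)[2 - i]       # annotation of the non-spinal child
            trees = {
                (sym, ((s, t) if i == 1 else (t, s)))
                for s in trees
                for t in T
                if gen(t[0]) == n_other     # generators must match
            }
        return trees
    return att
```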

To obtain the tree language defined by $G$, it is necessary to apply this operation recursively on the set of spines.

Definition 15.

Let $L \subseteq (\Sigma_0 \times N)(\Sigma_2 \times N^2)^*$. We inductively define the tree language $F(L)$ generated by L to be the smallest tree language $F$ such that $\mathrm{att}_F(w) \subseteq F$ for every w ∈ L.

Example 16.

The CFG $G'$ of Example 13 generates the set of spines $S(G)$, and $F(S(G))_S$ contains the correctly assembled trees formed from these spines. Figure 3c shows a tree of $F(S(G))_S$ since the generator of the main spine is S = s, which is stored in spinal direction in the root label $\alpha_2^{\bar a s}$. We can observe the correspondence of annotations in non-spinal direction and the spine generator of the respective child in the same direction.

Next we prove that $F(S(G))_S$ and $T(G)$ coincide modulo relabeling. This shows that the context-free language $S(G)$ of spines completely describes the tree language $T(G)$ generated by $G$.

Theorem 17.

Let $G$ be normalized. Then $\pi(F(S(G))_S) = T(G)$, where the relabeling $\pi \colon (\Sigma_0 \times N) \cup (\Sigma_2 \times N^2) \to \Sigma_0 \cup \Sigma_2$ is given by $\pi(\alpha^n) = \alpha$ and $\pi(\sigma^{n_1 n_2}) = \sigma$ for all α ∈ Σ0, σ ∈ Σ2, and n, n1, n2 ∈ N.

Corollary 18.

There exists a pop-normalized MPDA $A$ such that $L(A) \cup L_1 = \mathrm{Next}(S(G))$, where $L_1 = \{w \in \mathrm{Next}(S(G)) \mid |w| = 1\}$. Moreover, $F(L(A) \cup L_1)_S$ and $T(G)$ coincide modulo relabeling.

Example 19.

The MPDA constructed in Corollary 18 for the spine grammar $G$ of Example 11 is depicted in Figure 5. Initial states are indicated using a start marker and final states are marked by a double circle. Pushing and popping stack operations are written with downwards and upwards arrows, respectively. The MPDA consists of two components. The bigger one describes the main spine, and the smaller one describes the side spine. The distinction between the three stack symbols is necessary due to pop-normalization. The distinction between $q_1$ and $q_1'$ (and similar states) is necessary because the symbol they output depends on the preceding spine symbol, since we recognize $\mathrm{Next}(S(G))$. For example, $\tau(q_1) = (\gamma_2^{s \bar c}, \gamma_2^{s \bar c})$ and $\tau(q_1') = (\beta_2^{s \bar b}, \gamma_2^{s \bar c})$. Similarly, $\tau(p_1) = (z, z)$ and $\tau(p_1') = (⊲, z)$, where $z = \eta_2^{\bar e \bar b}$. To completely capture the behavior of $G$, we additionally require the set $L_1 = \{(⊲, \alpha^{\bar a}), (⊲, \beta^{\bar b}), (⊲, \beta^{\bar e}), (⊲, \gamma^{\bar c})\}$, which contains the spines of length 1.

Figure 5: Sample MPDA (see Corollary 6).

In this section, let $G = (N, \Sigma, S, P)$ be a normalized spine grammar with spine direction $d \colon \Sigma_2 \to \{1, 2\}$ and $A = (Q, \Delta, \Gamma, \delta, \tau, I, F)$ the pop-normalized MPDA constructed in Corollary 18 with pop: Γ → Q. We note that Δ = Σ′ × Σ′′ with Σ′ = {⊲} ∪ (Σ2 × N²) as well as Σ′′ = (Σ0 × N) ∪ (Σ2 × N²). Moreover, let ⊥ ∉ Q be a special symbol. To provide better access to the components of the MPDA $A$, we define some additional maps.

The spine generator gen: Q → N is given for every state q ∈ Q by gen(q) = gen(s2), where τ(q) = (s1, s2) ∈ Δ. Since $A$ cannot accept strings of length 1, we have to treat them separately. Let $L_1 = \{w \in \mathrm{Next}(S(G)) \mid |w| = 1\}$ and let gen: L1 → N be given by gen(w) = n for all $w = (⊲, \alpha^n) \in L_1$. We extend τ: Q → Δ to $\tau' \colon (Q \cup L_1) \to \Delta$ by τ′(q) = τ(q) for all q ∈ Q and τ′(a) = a for short strings a ∈ L1.

Recall that D = {/, ∖}. The slash type slash: (Q ∖ F) → D and combining nonterminal comb: (Q ∖ F) ∪ {⊥} → N of a state q ∈ Q ∖ F tell whether the symbol τ(q) generated by state q occurs as the first or second child of its parent symbol and with which spine generator it is combined. Let $\tau(q) = (\sigma^{n_1 n_2}, s_2)$ with $\sigma^{n_1 n_2} \in \Sigma_2 \times N^2$ and s2 ∈ Σ′′. The slash type and the combining nonterminal can be determined from the next symbol $\sigma^{n_1 n_2}$. Formally, slash(q) = / if d(σ) = 1 and slash(q) = ∖ otherwise. In addition, $\mathrm{comb}(q) = n_{3 - d(\sigma)}$ and comb(⊥) = S.

We simulate the accepting runs of $A$ in the spines consisting of primary categories of the CCG. The main idea is that the primary categories on the spine store the current configuration of $A$. This is achieved by adding an additional argument for transitions that push a symbol, whereas for each popping transition, an argument is removed. The rightmost argument stores the current state in the first component and the top of the stack in the second component. The previous arguments store the preceding stack symbols in their second components and the state the automaton returns to when the stack symbol stored in the next argument is popped in the first components. To implement the required transformations of consecutive primary categories, the secondary categories need to have a specific structure. This mandates that the categories at the top of a spine (which act as secondary categories unless they belong to the main spine) cannot store their corresponding automaton state in the first component of the last argument as usual, but instead utilize the third component of their target. Thus each argument stores the final state corresponding to its secondary combination partner in the third component. This third component also allows us to decide whether a category is primary: A category is a primary category if and only if the spine generator of the state stored in the first component of the last argument and the spine generator of the state stored in the last component of the target coincide. This is possible since $G$ is normalized, which yields that attaching spines have a spine generator that is different from the spine generator of the spine that they attach to.
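The encoding of a configuration as an argument list can be sketched as follows (the function name and list encoding are ours; pop is the map from pop-normalization):

```python
# Hedged illustration of the configuration encoding described above:
# a configuration <q, g1...gk> (g1 the top symbol) becomes the argument
# list of a primary category, read left to right from the bottom of the
# stack. The rightmost argument stores (current state, top symbol); every
# earlier argument stores (pop(symbol of the next argument), deeper symbol).
def encode(q, stack, pop):
    rev = stack[::-1]                   # bottom ... top
    args = []
    for i, g in enumerate(rev):
        state = q if i == len(rev) - 1 else pop(rev[i + 1])
        args.append((state, g))
    return args
```

For a stack with top "X" above "Y", the last argument holds (q, "X") and the one before it holds (pop("X"), "Y"), matching the description above.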

Definition 20.

We define the CCG $G_{A,L_1} = (\Delta_0, A, R, I', L)$ as follows:

Let $A = \{(q, \gamma, f) \in A' \mid \mathit{gen}(f) = \mathit{comb}(q)\}$ with $A' = (Q \cup \{\bot\}) \times \Gamma \times (F \cup L_1)$. We use $a_i$ to refer to the $i$-th component of an atom $a \in A$. Additionally, let $I' = \{(\bot, \varepsilon, f) \in A \mid \mathit{gen}(f) = S\}$.

In the rules $R = \bigcup_{{|} \in D} (R_1^{|} \cup R_2^{|} \cup R_3^{|})$ we underline the primary category $ax{|}b$, which always needs to fulfill $\mathit{gen}(a_3) = \mathit{gen}(b_1)$.
$R_1^{/} = \bigcup_{\substack{a,b,c \in A,\ {|} \in D \\ (b_1, \varepsilon, \varepsilon, c_1) \in \delta,\ b_2 = c_2}} \bigl\{\, \underline{ax/b} \;\; b{|}c \;\Rightarrow\; ax{|}c \,\bigr\}$
(1)
$R_2^{/} = \bigcup_{\substack{a,b,c,e \in A,\ {|}, {|'} \in D \\ (b_1, \varepsilon, e_2, e_1) \in \delta,\ b_2 = c_2,\ c_1 = \mathit{pop}(e_2)}} \bigl\{\, \underline{ax/b} \;\; b{|}c{|'}e \;\Rightarrow\; ax{|}c{|'}e \,\bigr\}$
(2)
$R_3^{/} = \bigcup_{\substack{a,b \in A \\ \exists q\colon (b_1, b_2, \varepsilon, q) \in \delta}} \bigl\{\, \underline{ax/b} \;\; b \;\Rightarrow\; ax \,\bigr\}$
We listed only the forward rules, but for each forward rule there also exists a symmetric backward rule, yielding the rule sets $R_1^{\backslash}$, $R_2^{\backslash}$, and $R_3^{\backslash}$.
We need some notions for the lexicon. A category $c \in C(A)$ is well-formed if ${|} = \mathit{slash}(b_1)$ and $b_1 \in Q$ for every $i \in [\mathit{ar}(c)]$ with ${|}b = \mathit{arg}(c, i)$. Let $C_{\mathrm{wf}} = \{c \in C(A) \mid c \text{ well-formed}\}$ be the set of well-formed categories. Clearly $I' \subseteq C_{\mathrm{wf}}$. In addition, we introduce sets $\top_{L_1}$ and $\top_A$ of top-of-spine categories derived from the short strings of $L_1$ and the strings accepted by $A$, respectively:
$\top_{L_1} = \{a \in I' \mid a_3 \in L_1\} \cup \bigcup_{\substack{r \in R \\ ax = \mathit{sec}(r)}} \{ax \in C_{\mathrm{wf}} \mid a_3 \in L_1\} \qquad \top_A = \{a \in I' \mid a_3 \in F\} \cup \bigcup_{\substack{r \in R \\ ax = \mathit{sec}(r)}} \{ax \in C_{\mathrm{wf}} \mid a_3 \in F\}$
Note that $\top_{L_1} \cup \top_A \subseteq C_{\mathrm{wf}}$. Now we can define the lexicon as follows for all $\alpha \in \Delta_0 = \Sigma' \times (\Sigma_0 \times N)$:
$L(\alpha) = \bigl\{ax \mid ax \in \top_{L_1},\ \tau'(a_3) = \alpha\bigr\} \cup \bigl\{ax{|}b \in C_{\mathrm{wf}} \mid ax \in \top_A,\ b_1 \in I,\ \mathit{gen}(a_3) = \mathit{gen}(b_1),\ \mathit{pop}(b_2) = a_3,\ \tau'(b_1) = \alpha\bigr\}$
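The well-formedness condition on categories can be sketched as a simple check over the argument list (a simplified tuple encoding of ours; `slash_of` stands in for the map slash(·)):

```python
# Sketch: a category is encoded as (target_atom, args), where each
# argument is a (slash, atom) pair and each atom is a (state, stack
# symbol, final state) triple.  Well-formedness demands that every
# argument's slash agrees with the slash type of its first component
# and that this component is a proper automaton state (never the
# bottom marker).

def well_formed(category, slash_of, states):
    _, args = category
    return all(sl == slash_of(atom[0]) and atom[0] in states
               for sl, atom in args)

# a category with target (bot, "", qf) and single argument /(q0, w, g)
cat = (("_bot", "", "qf"), [("/", ("q0", "w", "g"))])
print(well_formed(cat, lambda q: "/", {"q0"}))   # True
print(well_formed(cat, lambda q: "\\", {"q0"}))  # False
```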

Each atom of A consists of three components. The first component stores the current state of $A$ (or the special symbol ⊥), the second component stores the current symbol at the top of the stack, and the third component stores the final state corresponding to the combining category of the attaching side spine. With this intuition, the rule system directly implements the transitions of $A$.

The lexicon assigns categories to symbols that can label leaves, so these symbols are taken from the nullary terminal symbols. Each assigned category consists of a category that can appear at the top of a spine, extended by an additional argument for the initial state of an accepting run. Spines of length 1 are translated directly to secondary categories or initial categories.

Let us make a few general observations that hold for all the categories that appear in derivation trees of $G_{A,L_1}$: (i) All categories are well-formed. This follows from the fact that only well-formed categories occur in the lexicon and that all categories in the derivation trees consist of atoms and arguments that were already present in the lexicon. (ii) All primary categories $ax{|}b$ obey $\mathit{gen}(a_3) = \mathit{gen}(b_1)$. This is directly required by the rule system.

Finally, we describe how to relabel the derivation trees $D(G_{A,L_1})$ of the CCG $G_{A,L_1}$, whose categories are built from the input symbols of the MPDA $A$. Recall that only well-formed categories occur in derivation trees. Primary and non-primary categories are relabeled differently. The relabeling $\rho\colon C_{\mathrm{wf}} \to \Delta$ is defined by $\rho(ax{|}b) = \tau'(b_1)$ for all primary categories $ax{|}b \in C_{\mathrm{wf}}$ (i.e., those with $\mathit{gen}(a_3) = \mathit{gen}(b_1)$), and by $\rho(ax) = \tau'(a_3)$ for all initial and secondary categories $ax \in C_{\mathrm{wf}}$.
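Under a simplified tuple encoding of categories (our illustration, not the paper's notation), the case distinction of the relabeling can be written down directly; `gen` and `tau_prime` stand in for gen(·) and τ′(·):

```python
# Sketch of the relabeling: a category is (target_atom, args) with
# atoms as (state, stack symbol, final state) triples.  Primary
# categories are recognized by comparing the spine generator of the
# target's third component with that of the last argument's first
# component; they are labeled by the latter, all others by the former.

def relabel(category, gen, tau_prime):
    target, args = category
    if args and gen(target[2]) == gen(args[-1][1][0]):
        # primary category a x | b: label tau'(b1)
        return tau_prime(args[-1][1][0])
    # initial or secondary category a x: label tau'(a3)
    return tau_prime(target[2])

gen = {"q0": "s", "qf": "s", "g": "c", "q1": "s"}.get  # made-up generators
tau_prime = lambda q: "sym_" + q                       # made-up output map

primary = (("_bot", "", "qf"), [("/", ("q0", "w", "g"))])
print(relabel(primary, gen, tau_prime))    # sym_q0  (generators agree)
secondary = (("q0", "w", "g"), [("/", ("q1", "w", "g"))])
print(relabel(secondary, gen, tau_prime))  # sym_g   (generators differ)
```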

The following property requires that the spine grammar $G$ is normalized, so a spine never has the same spine generator as its attached spines.

Lemma 21.

For all secondary categories $ax{|}b$ we have $\mathit{gen}(a_3) \neq \mathit{gen}(b_1)$.

We are now ready to describe the general form of primary spines of $G_{A,L_1}$. Given a primary spine $c_0 \cdots c_n$ read from the lexicon entry towards the root with $n \geq 1$, we know that it starts with a lexicon entry $c_0 = ax{|}b \in \mathcal{L}(\Delta_0)$ and ends with the non-primary category $ax$, which as such cannot be modified further. Hence each of the categories $c \in \{c_0, \ldots, c_{n-1}\}$ has the form $ax\,{|_1}b_1 \cdots {|_m}b_m$ with $m \geq 1$. Let $b_i = (q_i, \gamma_i, f_i)$ for every $i \in [m]$. The category $c_n$ is relabeled to $\tau'(a_3)$ and $c$ is relabeled to $\tau'(q_m)$. Additionally, unless $a_1 = \bot$, the first components of all atoms in $ax$ have the same spine generator $\mathit{gen}(a_1)$, and $\mathit{gen}(q_1) = \cdots = \mathit{gen}(q_m)$, but $\mathit{gen}(a_1) \neq \mathit{gen}(q_1)$. Finally, neighboring arguments ${|_{i-1}}b_{i-1}\,{|_i}b_i$ in the suffix are coupled such that $\mathit{pop}(\gamma_i) = q_{i-1}$ for all $i \in [m] \setminus \{1\}$. This coupling is introduced by the rules of second degree and preserved by the other rules.

Using these observations, it can be proved that the primary spines of $GA,L1$ are relabeled to strings of $Next(S(G))$ and vice versa. Additionally, spines attach in essentially the same manner in the CCG and using $F$. This yields the following main theorem.

Theorem 22.

Given a spine grammar $G$, we can construct a CCG $G'$ that can generate $T(G)$.

Example 23.

Figure 6 shows part of the derivation tree of the CCG $G_{A,L_1}$ that corresponds to the tree of Figure 3a, which is generated by the spine grammar $G$ of Example 11. We use the following abbreviations: $\alpha = (\lhd, \alpha_{\bar a})$, $\beta = (\lhd, \beta_{\bar b})$, and $\gamma = (\lhd, \gamma_{\bar c})$. The labeling of the depicted section is $\delta\, \gamma_2 \gamma_2 \beta_2$ for the main spine and $\beta\, \eta_2$ for the side spine (see Figure 3a). The corresponding runs of $A$ are $(\langle q_0, \omega \rangle, \langle q_1, \omega \rangle, \langle q_1', \upsilon\omega \rangle, \langle q_2, \upsilon\omega \rangle)$ and $(\langle p_0, \chi \rangle, \langle p_1', \varepsilon \rangle)$.

Let us observe how the transitions of $A$ are simulated by $G_{A,L_1}$. The first transition $(q_0, \varepsilon, \varepsilon, q_1)$ on the main spine does not modify the stack. It is implemented by replacing the last argument $/(q_0, \omega, \gamma)$ by $/(q_1, \omega, \gamma)$. The next transition $(q_1, \varepsilon, \upsilon, q_1')$ pushes the symbol $\upsilon$ onto the stack. The argument $/(q_1, \omega, \gamma)$ is thus replaced by $\backslash(q_3, \omega, \alpha)\, /(q_1', \upsilon, p_1')$. As the stack grows, an additional argument with the new state and stack symbol is added. The previous argument stores $\mathit{pop}(\upsilon) = q_3$ to ensure that we enter the correct state after popping $\upsilon$. It also contains the previous, unchanged stack symbol $\omega$. The popping transition $(p_0, \chi, \varepsilon, p_1')$ on the side spine run is realized by removing $/(p_0, \chi, \beta)$.

The third components are required to relabel the non-primary categories. At the bottom of the main spine, $c_1 = (\bot, \varepsilon, q_3') / (q_0, \omega, \gamma)$ is a primary category because $q_0$ and $q_3'$ are associated with the same spine generator $s$. Thus, $c_1$ is relabeled to $\tau'(q_0)$. However, for $c_2 = (q_0, \omega, \gamma) / (q_1, \omega, \gamma)$ the spine generators of $\gamma$ and of the output of $q_1$ are different ($\bar{c}$ and $s$). Hence it is a non-primary category and is relabeled to $\gamma$.

Concerning the lexicon, $c_1$ is a lexical category because $(\bot, \varepsilon, q_3') \in \top_A$ can appear at the top of a spine as an initial category with $q_3' \in F$ in its third component, while the appended argument $(q_0, \omega, \gamma)$ represents an initial configuration of $A$. Similarly, $c_2$ is a well-formed secondary category of a rule and the third component of its target is in $L_1$. Therefore, it is an element of $\top_{L_1}$, which is a subset of the lexicon.

Let us illustrate how the attachment of the side spine to the main spine is realized. The lexicon contains $(q_1', \upsilon, p_1') \backslash (q_2, \upsilon, \alpha) \backslash (p_0, \chi, \beta)$, of which the first two atoms are responsible for performing a transition on the main spine. This part cannot be modified since the rule system disallows it. The target stores the final state $p_1'$ of the side spine run in its third component. The appended argument models the initial configuration of the side spine run, starting in state $p_0$ with $\chi$ on the stack.

Figure 6: Part of a derivation tree of $G_{A,L_1}$ (see Example 23).


For the converse inclusion we utilize Theorem 20 of Kuhlmann et al. (2019). It states that for every CCG $G′$ there exists an sCFTG that generates the rule trees of $G′$. Whereas derivation trees are labeled by categories, rule trees are labeled by lexicon entries at leaves and by applied rules (instead of the output category) at inner nodes. Rule trees are a natural encoding of derivation trees using only a finite set of labels. As each rule indicates the target and last argument of its output category, rule trees can be relabeled in the same manner as derivation trees. For completeness’ sake we restate Definition 16 of Kuhlmann et al. (2019).

Definition 24.

Let $G = (\Sigma, A, R, I, L)$ be a CCG and $T = T_R(\mathcal{L}(\Sigma))$. A tree $t \in T$ is a rule tree if $\mathit{cat}(t) \in I$, where the partial map $\mathit{cat}\colon T \to C(A)$ is inductively defined by (i) $\mathit{cat}(a) = a$ for all lexicon entries $a \in \mathcal{L}(\Sigma)$, (ii) $\mathit{cat}(r(t_1, t_2)) = azy$ for all trees $t_1, t_2 \in T$ with $\mathit{cat}(t_1) = az/b$ and $\mathit{cat}(t_2) = by$, where $r$ is the forward rule combining $az/b$ and $by$ into $azy$, and (iii) $\mathit{cat}(r(t_1, t_2)) = azy$ for all $t_1, t_2 \in T$ with $\mathit{cat}(t_1) = by$ and $\mathit{cat}(t_2) = az\backslash b$, where $r$ is the backward rule combining $by$ and $az\backslash b$ into $azy$. The set of all rule trees of $G$ is denoted by $R(G)$.
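The two inductive cases of cat(·) differ only in the direction of the consumed argument, so both can be sketched as a single combine step (a simplified encoding of ours, with categories as a target plus a list of (slash, atom) arguments):

```python
# Sketch of one step of cat(.): the primary category consumes its last
# argument /b (forward) or \b (backward) and inherits the remaining
# (excess) arguments of the secondary category.

def combine(primary, secondary, direction):
    (a, pargs), (b, sargs) = primary, secondary
    slash, last = pargs[-1]
    assert slash == direction and last == b, "rule not applicable"
    return (a, pargs[:-1] + sargs)

# forward application:  X/Y  Y  =>  X
print(combine(("X", [("/", "Y")]), ("Y", []), "/"))            # ('X', [])
# forward composition:  X/Y  Y/Z  =>  X/Z
print(combine(("X", [("/", "Y")]), ("Y", [("/", "Z")]), "/"))  # ('X', [('/', 'Z')])
```

Because the excess arguments are appended unchanged, a secondary category with $k$ arguments yields a rule of degree $k$; the construction of this paper only ever needs $k \leq 2$.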

We observe that any category relabeling can equivalently be applied to rule trees instead of derivation trees (because a category relabeling only depends on the target $a$ and the last argument ${|}b$ of a category $ax{|}b$). This yields the second main theorem.

Theorem 25.

CCGs and sCFTGs are strongly equivalent up to relabeling.

Kepser and Rogers (2011) proved that TAGs and sCFTGs are strongly equivalent, which shows that TAGs are also strongly equivalent (up to relabeling) to CCGs.

Corollary 26.

CCGs and TAGs are strongly equivalent up to relabeling.

Clearly, strong equivalence implies weak equivalence as well (without the relabeling, since the lexicon provides the relabeling). Weak equivalence was famously proven by Vijay-Shanker and Weir (1994), but Theorem 3 of Kuhlmann et al. (2015) shows that the original construction is incorrect. However, Weir (1988) provides an alternative construction and proof. Our contribution provides a stronger form (and proof) of this old equivalence result and avoids the ε-entries on which the original construction heavily relies. An ε-entry is a category assigned to the empty string; these interspersed categories form the main building block of the original constructions. Whether ε-entries are actually necessary (Vijay-Shanker and Weir, 1994) is a natural and important question, explicitly raised by Kuhlmann et al. (2015). We settle this question and demonstrate that they can be avoided.

Corollary 27.

CCGs and TAGs are weakly equivalent, and CCGs with ε-entries and CCGs without them generate the same (ε-free) languages.

The tree expressive power of CCGs with restricted rule degrees has already been investigated by Kuhlmann et al. (2019). It has been shown that 0-CCGs accept a proper subset of the regular tree languages (Gécseg and Steinby, 1997), whereas 1-CCGs accept exactly the regular tree languages. It remained open whether there is a k such that k-CCGs and (k + 1)-CCGs have the same expressive power. Our construction establishes that 2-CCGs are as expressive as k-CCGs for arbitrary k ≥ 2. Another consequence of our construction is that first-order categories are sufficient.

Corollary 28.

2-CCGs with first-order categories have the same expressive power as $k$-CCGs with $k > 2$.

We presented a translation from spine grammar to CCG. Due to the strong equivalence of spine grammar and TAG (Kepser and Rogers, 2011), we can also construct a strongly equivalent CCG for each TAG. Together with the translation from CCG to sCFTG (Kuhlmann et al., 2019), this proves the strong equivalence of TAG and CCG, which means that both formalisms generate the same derivation trees modulo relabelings. Our construction uses CCG rules of degree at most 2, only first-order categories, lexicon entries of arity at most 3, and no ε-entries in the lexicon. Such CCGs thus have full expressive power. Avoiding ε-entries is particularly interesting because they violate the Principle of Adjacency (Steedman, 2000, p. 54), which is a fundamental linguistic principle underlying CCG and requires that all combining categories correspond to phonologically realized counterparts in the input and are string-adjacent. Their elimination is performed by trimming them from the sCFTG obtained from a CCG with ε-entries and translating the trimmed sCFTG back to a CCG using our construction.

Translating CCG to sCFTG (Kuhlmann et al., 2019) yields sCFTGs whose size is exponential in a CCG-specific constant, which depends on the maximal rule degree and the maximal arity of lexicon entries. The increase can be attributed to variables in CCG rules, which need to be properly instantiated. Our construction increases the grammar size only polynomially, which can be verified for each step. Overall, a k-CCG can be converted to an equivalent 2-CCG without ε-entries in time and space exponential in k (and the maximal length of lexicon entries) and polynomial in the size of the grammar.

We would like to thank Mark Steedman and the three anonymous reviewers for their valuable and detailed comments, which greatly helped in improving the comprehensibility of this paper. The work of Lena Katharina Schiffer was funded by the German Research Foundation (DFG) Research Training Group GRK 1763 ‘Quantitative Logics and Automata’.

Jean-Michel Autebert, Jean Berstel, and Luc Boasson. 1997. Context-free languages and pushdown automata. In Grzegorz Rozenberg and Arto Salomaa, editors, Handbook of Formal Languages, volume 1, chapter 3, pages 111–174. Springer.

Jason Baldridge. 2002. Lexically Specified Derivational Control in Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh.

Yehoshua Bar-Hillel, Haim Gaifman, and Eli Shamir. 1960. On categorial and phrase-structure grammars. Bulletin of the Research Council of Israel, 9F(1):1–16.

Haskell B. Curry, Robert Feys, and William Craig. 1958. Combinatory Logic. Number 1 in Studies in Logic and the Foundations of Mathematics. North-Holland.

Normann Decker, Martin Leucker, and Daniel Thoma. 2013. Impartiality and anticipation for monitoring of visibly context-free properties. In Proc. Runtime Verification, volume 8174 of LNCS, pages 183–200. Springer.

Herbert Fleischner. 1977. On the equivalence of Mealy-type and Moore-type automata and a relation between reducibility and Moore-reducibility. Journal of Computer and System Sciences, 14(1):1–16.

Akio Fujiyoshi and Takumi Kasai. 2000. Spinal-formed context-free tree grammars. Theory of Computing Systems, 33(1):59–83.

Ferenc Gécseg and Magnus Steinby. 1997. Tree languages. In Grzegorz Rozenberg and Arto Salomaa, editors, Handbook of Formal Languages, volume 3, chapter 1, pages 1–68. Springer.

Saul Gorn. 1965. Explicit definitions and linguistic dominoes. In Systems and Computer Science, Proceedings of the Conference held at Univ. of Western Ontario, pages 77–115.

Julia Hockenmaier and Peter Young. 2008. Non-local scrambling: The equivalence of TAG and CCG revisited. In Proc. 9th TAG+. University of Tübingen.

Aravind K. Joshi. 1985. Tree adjoining grammars: How much context-sensitivity is required to provide reasonable structural descriptions? In David R. Dowty, Lauri Karttunen, and Arnold M. Zwicky, editors, Natural Language Parsing, chapter 6, pages 206–250. Cambridge University Press.

Stephan Kepser and Jim Rogers. 2011. The equivalence of tree adjoining grammars and monadic linear context-free tree grammars. Journal of Logic, Language and Information, 20(3):361–384.

Alexander Koller and Marco Kuhlmann. 2009. Dependency trees and the strong generative capacity of CCG. In Proc. 12th EACL, pages 460–468. ACL.

Marco Kuhlmann, Alexander Koller, and Giorgio Satta. 2010. The importance of rule restrictions in CCG. In Proc. Association for Computational Linguistics, pages 534–543. ACL.

Marco Kuhlmann, Alexander Koller, and Giorgio Satta. 2015. Lexicalization and generative power in CCG. Computational Linguistics, 41(2):187–219.

Marco Kuhlmann, Andreas Maletti, and Lena K. Schiffer. 2019. The tree-generative capacity of combinatory categorial grammars. In Proc. Foundations of Software Technology and Theoretical Computer Science, volume 150 of LIPIcs, pages 44:1–44:14. Schloss Dagstuhl — Leibniz-Zentrum für Informatik.

Marco Kuhlmann, Giorgio Satta, and Peter Jonsson. 2018. On the complexity of CCG parsing. Computational Linguistics, 44(3):447–482.

Marco Kuhlmann and Giorgio Satta. 2012. Tree-adjoining grammars are not closed under strong lexicalization. Computational Linguistics, 38(3):617–629.

William C. Rounds. 1969. Context-free grammars on trees. In Proc. Symposium on Theory of Computing, pages 143–148. ACM.

Mark Steedman. 2000. The Syntactic Process. MIT Press.

Mark Steedman and Jason Baldridge. 2011. Combinatory categorial grammar. In Robert D. Borsley and Kersti Börjars, editors, Non-Transformational Syntax: Formal and Explicit Models of Grammar, chapter 5, pages 181–224. Blackwell.

Krishnamurti Vijay-Shanker. 1988. A Study of Tree Adjoining Grammars. Ph.D. thesis, University of Pennsylvania.

Krishnamurti Vijay-Shanker and David J. Weir. 1994. The equivalence of four extensions of context-free grammars. Mathematical Systems Theory, 27(6):511–546.

David J. Weir. 1988. Characterizing Mildly Context-sensitive Grammar Formalisms. Ph.D. thesis, University of Pennsylvania.