Abstract
We describe a recognition algorithm for a subset of binary linear context-free rewriting systems (LCFRS) with running time O(nωd) where M(m) = O(mω) is the running time for m × m matrix multiplication and d is the “contact rank” of the LCFRS—the maximal number of combination and non-combination points that appear in the grammar rules. We also show that this algorithm can be used as a subroutine to obtain a recognition algorithm for general binary LCFRS with running time O(nωd+1). The currently best known ω is smaller than 2.38. Our result provides another proof for the best known result for parsing mildly context-sensitive formalisms such as combinatory categorial grammars, head grammars, linear indexed grammars, and tree-adjoining grammars, which can be parsed in time O(n4.76). It also shows that inversion transduction grammars can be parsed in time O(n5.76). In addition, binary LCFRS subsumes many other formalisms and types of grammars, for some of which we also improve the asymptotic complexity of parsing.
1. Introduction
The problem of grammar recognition is a decision problem of determining whether a string belongs to a language induced by a grammar. For context-free grammars (CFGs), recognition can be done using parsing algorithms such as the CKY algorithm (Kasami 1965; Younger 1967; Cocke and Schwartz 1970) or the Earley algorithm (Earley 1970). The asymptotic complexity of these chart-parsing algorithms is cubic in the length of the sentence.
In a major breakthrough, Valiant (1975) showed that context-free grammar recognition is no more complex than Boolean matrix multiplication for a matrix of size m × m where m is linear in the length of the sentence, n. With current state-of-the-art results in matrix multiplication, this means that CFG recognition can be done with an asymptotic complexity of O(n2.38).
In this article, we show that the problem of linear context-free rewriting system (LCFRS) recognition can also be reduced to Boolean matrix multiplication. Current chart-parsing algorithms for binary LCFRS have an asymptotic complexity of O(n3f), where f is the maximal fan-out of the grammar. Our algorithm takes time O(nωd), for a constant d which is a function of the grammar (and not the input string), and where the complexity of n × n matrix multiplication is M(n) = O(nω). The parameter d can be as small as f, meaning that we reduce parsing complexity from O(n3f) to O(nωf), and that, in general, the savings in the exponent is larger for more complex grammars.
LCFRS is a broad family of grammars. As such, we are able to support the findings of Rajasekaran and Yooseph (1998), who showed that tree-adjoining grammar (TAG) recognition can be done in time O(M(n2)) = O(n4.76) (TAG can be reduced to LCFRS with d = 2). As a result, combinatory categorial grammars, head grammars, and linear indexed grammars can be recognized in time O(M(n2)). In addition, we show that inversion transduction grammars (ITGs; Wu 1997) can be parsed in time O(nM(n2)) = O(n5.76), improving the best asymptotic complexity previously known for ITGs.
1.1 Matrix Multiplication State of the Art
Our algorithm reduces the problem of LCFRS parsing to Boolean matrix multiplication. Let M(n) be the complexity of multiplying two such n × n matrices. These matrices can be naïvely multiplied in O(n3) time by computing for each output cell the dot product between the corresponding row and column in the input matrices (each such product is an O(n) operation). Strassen (1969) discovered a way to do the same multiplication in O(n2.8704) time—his algorithm is a divide and conquer algorithm that eventually uses only seven operations (instead of eight) to multiply 2 × 2 matrices.
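To make the divide-and-conquer structure concrete, here is a minimal sketch of Strassen's seven-product scheme for square matrices whose dimension is a power of two (the cutoff fallback to ordinary multiplication is a practical addition of ours):

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply square matrices A and B whose dimension is a power of two.

    Uses seven recursive products instead of eight; below `cutoff`,
    falls back to ordinary matrix multiplication.
    """
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7   # seven products recombined
    C[:h, h:] = M3 + M5             # into the four output blocks
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```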
With this discovery, there have been many attempts to further reduce the complexity of matrix multiplication, relying on principles similar to Strassen's method: a reduction in the number of operations it takes to multiply sub-matrices of the original matrices to be multiplied. Coppersmith and Winograd (1987) identified an algorithm that has the asymptotic complexity of O(n2.375477). Others have slightly improved that algorithm, and currently there is an algorithm for matrix multiplication with M(n) = O(nω) such that ω = 2.3728639 (Le Gall 2014). It is known that M(n) = Ω(n2 log n) (Raz 2002).
Although the asymptotically best matrix multiplication algorithms have large constant factors lurking in the O-notation, Strassen's algorithm does not, and is widely used in practice. Benedí and Sánchez (2007) show speed improvement when parsing natural language sentences using Strassen's algorithm as the matrix multiplication subroutine for Valiant's algorithm for CFG parsing. This indicates that similar speed-ups may be possible in practice using our algorithm for LCFRS parsing.
1.2 Main Result
Our results focus on the asymptotic complexity as a function of string length. We do not give explicit grammar constants. For other work that focuses on reducing the grammar constant in parsing, see, for example, Eisner and Satta (1999), Dunlop, Bodenstab, and Roark (2010), and Cohen, Satta, and Collins (2013). For a discussion of the optimality of the grammar constants in Valiant's algorithm, see, for example, Abboud, Backurs, and Williams (2015).
2. Background and Notation
This section provides background on LCFRS, and establishes notation used in the remainder of the paper. A reference table of notation is also provided in Appendix A.
For an integer n, let [n] denote the set of integers {1, … , n}. Let [n]0 = [n] ∪ {0}. For a set X, we denote by X+ the set of all sequences of length 1 or more of elements from X.
A span is a pair of integers denoting left and right endpoints for a substring in a larger string. The endpoints are placed in the “spaces” between the symbols in a string. For example, the span (0, 3) spans the first three symbols in the string. For a string of length n, the set of potential endpoints is [n]0.
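For example, with endpoints lying between symbols, the span (i, j) of a string w picks out exactly the Python slice w[i:j] (a small illustration of the convention, not part of the formalism):

```python
w = "parsing"             # n = 7, endpoints in [7]0 = {0, ..., 7}
i, j = 0, 3               # the span (0, 3)
assert w[i:j] == "par"    # covers the first three symbols
```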
We turn now to give a succinct definition of binary LCFRS. For more details about LCFRS and their relationship to other grammar formalisms, see Kallmeyer (2010). A binary LCFRS is a tuple G = (𝓛, 𝒯, ℛ, φ, S) such that:
- 𝓛 is the set of nonterminal symbols in the grammar.
- 𝒯 is the set of terminal symbols in the grammar. We assume 𝓛 ∩ 𝒯 = ∅.
- φ is a function specifying a fixed fan-out for each nonterminal (φ : 𝓛 → ℕ).
- ℛ is a set of productions. Each production p has the form A → g[B, C] where A, B, C ∈ 𝓛, and g is a composition function g : (𝒯+)φ(B) × (𝒯+)φ(C) → (𝒯+)φ(A), which specifies how to assemble the φ(B) + φ(C) spans of the right-hand side nonterminals into the φ(A) spans of the left-hand side nonterminal. We use square brackets as part of the syntax for writing productions, and parentheses to denote the application of the function g. The function g must be linear and non-erasing, which means that if g is applied on a pair of tuples of strings, then each input string appears exactly once in the output, possibly as a substring of one of the strings in the output tuple. Rules may also take the form A → g[], where g returns a constant tuple of one string from 𝒯+. (A worked example of a composition function follows this definition.)
- S ∈ 𝓛 is a start symbol. Without loss of generality, we assume φ(S) = 1.
We define first the set yield(A) for every A ∈ 𝓛:
- For every rule A → g[] ∈ ℛ, g() ∈ yield(A).
- For every rule A → g[B, C] ∈ ℛ and all tuples β ∈ yield(B), γ ∈ yield(C), g(β, γ) ∈ yield(A).
- Nothing else is in yield(A).

The string language of G is L(G) = {w | 〈w〉 ∈ yield(S)}.
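As a concrete illustration of a composition function, consider a head-grammar-style wrapping rule A → g[B, C] with φ(B) = 2 and φ(C) = 1 (a minimal sketch in Python; the tuple-of-strings encoding is ours):

```python
def g(beta, gamma):
    """Wrapping composition: the two spans of B wrap around C's span.

    beta  = (b1, b2)  -- the two spans of B
    gamma = (c1,)     -- the single span of C
    Linear and non-erasing: each input string appears exactly once
    in the output, as a substring of the single span of A.
    """
    b1, b2 = beta
    (c1,) = gamma
    return (b1 + c1 + b2,)

assert g(("a", "b"), ("xy",)) == ("axyb",)
```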
We sometimes refer to the skeleton of a grammar rule A[α] → B[β] C[γ], which is just the context-free rule A → B C, omitting the variables. In that context, a logical statement such as (A → B C) ∈ ℛ is true if there is any rule A[α] → B[β] C[γ] ∈ ℛ for some α, β, and γ.
For our parsing algorithm, we assume that the grammar is in a normal form such that the variables β1, … , βφ(B) appear in order in α, that is, that the spans of B are not re-ordered by the rule, and similarly we assume that γ1, … , γφ(C) appear in order. If this is not the case in some rule, then the grammar can be transformed by introducing a new nonterminal for each permutation of a nonterminal that can be produced by the grammar. We further assume that α1,1 = β1, that is, that the first span of A begins with material produced by B rather than by C. If this is not the case for some rule, B and C can be exchanged to satisfy this condition.
We refer to an LCFRS rule A → B C as single-initial if the leftmost endpoint of C is internal to a span of A, and dual-initial if the leftmost endpoint of C is the beginning of a span of A. Our algorithm will require the input LCFRS to be in single-initial form, meaning that all rules are single-initial. We note that grammars for common formalisms including TAG and synchronous context-free grammar (SCFG) are in this form. If a grammar is not in single-initial form, dual-initial rules can be converted to single-initial form by adding an empty span to B that combines with the first spans of C immediately to its left, as shown in Figure 2. Specifically, for each dual-initial rule A → B C, if the first span of C appears between spans i and i + 1 of B, create a new nonterminal B′ with φ(B′) = φ(B) + 1, and add a rule B′ → B, where B′ produces B along with a span of length zero between spans i and i + 1 of B. We then replace the rule A → B C with A → B′ C, where the new span of B′ combines with C immediately to the left of C's first span. Because the new nonterminal B′ has fan-out one greater than B, this grammar transformation can increase a grammar's fan-out by at most one.
By limiting ourselves to binary LCFRS grammars, we do not necessarily restrict the power of our results. Any LCFRS with arbitrary rank (i.e., with an arbitrary number of nonterminals in the right-hand side) can be converted to a binary LCFRS (with potentially a larger fan-out). See discussion in Section 7.6.
Consider the phenomenon of cross-serial dependencies that exists in certain languages. It has been used in the past (Shieber 1985) to argue that Swiss German is not context-free. One can show that there is a homomorphism between Swiss German and the alphabet {a, b, c, d} such that the image of the homomorphism intersected with the regular language a*b*c*d* gives the language L = {aᵐbⁿcᵐdⁿ | m, n ≥ 1}. Because L is not context-free, this implies that Swiss German is not context-free, because context-free languages are closed under intersection with regular languages.
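For concreteness, L itself is generated by a binary LCFRS with fan-out 2; the following grammar is one possible construction (the nonterminal names are ours). Let P derive pairs 〈aᵐ, cᵐ〉 and Q derive pairs 〈bⁿ, dⁿ〉:
- Na → g[] = 〈a〉, Nb → g[] = 〈b〉, Nc → g[] = 〈c〉, Nd → g[] = 〈d〉.
- P → g[Na, Nc] with g(〈x〉, 〈y〉) = 〈x, y〉, and P → g[P, P] with g(〈p1, p2〉, 〈q1, q2〉) = 〈p1q1, q2p2〉.
- Q → g[Nb, Nd] and Q → g[Q, Q], defined analogously.
- S → g[P, Q] with g(〈p1, p2〉, 〈q1, q2〉) = 〈p1q1p2q2〉.

Each use of P keeps equal numbers of a's and c's in its two spans, and the rule for S interleaves the spans of P and Q to produce aᵐbⁿcᵐdⁿ.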
In general, binary LCFRS are more expressive than TAGs because they can have nonterminals with fan-out greater than 2, and because they can interleave the arguments of the composition function in any order.
3. A Sketch of the Algorithm
Our algorithm for LCFRS string recognition is inspired by the algorithm of Valiant (1975). It introduces a few important novelties that make it possible to use matrix multiplication for the goal of LCFRS recognition.
The algorithm relies on the observation that it is possible to construct a matrix T with a specific non-associative multiplication and addition operator such that multiplying T by itself k times on the left or on the right yields k-step derivations for a given string. The row and column indices of the matrix together assemble a set of spans in the string (the fan-out of the grammar determines the number of spans). Each cell in the matrix keeps track of the nonterminals that can dominate these spans. Therefore, computing the transitive closure of this matrix yields in each matrix cell the set of nonterminals that can dominate the assembled indices' spans for the specific string at hand.
There are several key differences between Valiant's algorithm and our algorithm. Valiant's algorithm has a rather simple matrix-indexing scheme for the matrix: The rows correspond to the left endpoints of a span and the columns correspond to its right endpoints. Our matrix indexing scheme can mix both left endpoints and right endpoints at either the rows or the columns. This is necessary because with LCFRS, spans for the right-hand side of an LCFRS rule can combine in various ways into a new set of spans for the left-hand side.
In addition, our indexing scheme is “over-complete.” This means that different cells in the matrix T (or its matrix powers) are equivalent and should consist of the same nonterminals. The reason we need such an over-complete scheme is again because of the possible ways spans of a right-hand side can combine in an LCFRS. To address this over-completeness, we introduce into the multiplication operator a “copy operation” that copies nonterminals between cells in order to maintain the same set of nonterminals in equivalent cells.
For T1, for example, the fact that B appears for the pair of addresses (1, 8) (for the row) and (2, 7) (for the column) denotes that B spans the constituents (1, 2) and (7, 8) in the string (this is assumed to be true—in practice, it is the result of a previous step of matrix multiplication). Similarly, with T2, C spans the constituents (2, 4) and (5, 7).
Here, we copy the nonterminal A from the address with the row (1, 8) and column (4, 5) into the address with the row (1, 4) and column (5, 8). Both of these addresses correspond to the same spans (1, 4) and (5, 8). Note that matrix row and column addresses can mix both starting points of spans and ending points of spans.
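Before giving the full construction, the overall idea can be sketched as a naive fixpoint computation over a set-valued matrix (Python; combine is a stand-in for the cell-level ⊗ operator defined in Section 4.2):

```python
def naive_closure(T, combine):
    """Grow T until no product T*T adds anything new.

    T maps (row_address, column_address) to a set of nonterminals;
    combine(X, Y, k) implements the non-associative cell product.
    This loop performs O(n) matrix products in the worst case; Valiant's
    algorithm organizes the same work around fast matrix multiplication.
    """
    changed = True
    while changed:
        changed = False
        for (i, k), X in list(T.items()):
            for (k2, j), Y in list(T.items()):
                if k != k2:
                    continue          # cells interact only through a shared address
                for a in combine(X, Y, k):
                    cell = T.setdefault((i, j), set())
                    if a not in cell:
                        cell.add(a)
                        changed = True
    return T
```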
4. A Matrix Multiplication Algorithm for LCFRS
We turn next to give a description of the algorithm. Our description is constructed as follows:
- In Section 4.1 we describe the basic matrix structure used for LCFRS recognition. This construction depends on a parameter d, the contact rank, which is a function of the underlying LCFRS grammar we parse with. We also describe how to create a seed matrix, for which we need to compute the transitive closure.
- In Section 4.2 we define the multiplication operator between cells of the matrices we use. This multiplication operator is distributive, but not associative, and as such we use Valiant's specialized transitive closure algorithm to compute the transitive closure of the seed matrix given a string.
- In Section 4.3 we define the contact rank parameter d. The smaller d is, the more efficient it is to parse with the specific grammar.
- In Section 4.4 we define when a binary LCFRS is “balanced.” This is an edge case that increases the final complexity of our algorithm by a factor of O(n). Nevertheless, it is an important case that appears in applications, such as inversion transduction grammars.
- In Section 4.5 we tie things together, and show that computing the transitive closure of the seed matrix we define in Section 4.1 yields a recognition algorithm for LCFRS.
4.1 Matrix Structure
In the following, matrix rows and columns are indexed by addresses drawn from a set N(d): ordered sequences of at most d elements of [n]0 × {0, 1}, where each element pairs a span endpoint with an additional bit indicating whether the endpoint is marked (marked endpoints are used only by the copy operations of Section 4.2). It will be safe to assume sequences from N(d) are monotonically increasing in their indices. For an i ∈ N(d), we overload notation, and often refer to the set of all elements in the first coordinate of each element in the sequence (ignoring the additional bits). As such,
- The set i ∪ j is defined for j ∈ N(d).
- If we state that i is in N(d) and includes a set of endpoints, it means that i is a sequence of these integers (ordered lexicographically) with the bit part determined as explained in the context (for example, all unmarked).
- The quantity |i| denotes the length of the sequence.
- The quantity min i denotes the smallest index among the first coordinates of all elements in the sequence i (ignoring the additional bits).
We emphasize that the variables i, j, and k are mostly elements in N(d) as overloaded above, not integers, throughout this article; we choose the symbols i, j, and k by analogy to the variables in the CKY parsing algorithm, and also because we use the sequences as addresses for matrix rows and columns. For i, j ∈ N(d), we define m(i, j) to be the set of pairs {(ℓ1, ℓ2), (ℓ3, ℓ4), … , (ℓ2f′−1, ℓ2f′)}, where f′ = (|i| + |j|)/2, such that ℓk < ℓk+1 for k ∈ [2f′ − 1] and (ℓk, 0) ∈ i ∪ j for k ∈ [2f′]. This means that m(i, j) takes as input the two sequences in matrix indices, merges them, sorts them, and then divides this sorted list into a set of f′ consecutive pairs. Whenever min j ≤ min i, m(i, j) is undefined. The interpretation of this is that ℓ1 should always belong to i and not j. See more details in Section 4.2. In addition, if any element of i or j is marked, m(i, j) is undefined.
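Concretely, m can be written as follows (a Python sketch; addresses are encoded as sequences of (endpoint, bit) pairs, as above):

```python
def m(i, j):
    """Merge two addresses into a set of spans; return None where undefined."""
    if any(bit != 0 for _, bit in i + j):
        return None                       # a marked index: m is undefined
    if min(p for p, _ in j) <= min(p for p, _ in i):
        return None                       # l1 must come from i, not j
    points = sorted(p for p, _ in i + j)
    if len(points) % 2 != 0:
        return None
    if any(points[k] >= points[k + 1] for k in range(len(points) - 1)):
        return None                       # endpoints must strictly increase
    return {(points[k], points[k + 1]) for k in range(0, len(points), 2)}
```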
We define an order < on elements i and j of N(d) by first sorting the sequences i and j and then comparing i and j lexicographically (ignoring the bits). This ensures that i < j if min i < min j. We assume that the rows and columns of our matrices are arranged in this order. For the rest of the discussion, we assume that d is a constant, and refer to T(d) as T and N(d) as N.
The intuition behind matrices of the type of T (meaning T and, as we see later, products of T with itself, or its transitive closure) is that each cell indexed by (i, j) in such a matrix consists of all nonterminals that can be generated by the grammar when parsing a sentence such that these nonterminals span the constituents m(i, j) (whenever m(i, j) is defined). Our normal form for LCFRS ensures that spans of a nonterminal are never re-ordered, meaning that it is not necessary to retain information about which indices demarcate which components of the nonterminal, because one can sort the indices and take the first two indices as delimiting the first span, the second two indices as delimiting the second span, and so on. The two additional N elements in each triplet in a cell are actually just copies of the row and column indices of that cell. As such, they are identical for all triplets in that cell. The additional ▷₁, ▷₂, ▷₃, ◁₁, ◁₂, ◁₃ symbols indicate to the matrix multiplication operator that a “copying operation” should happen between equivalent cells (Section 4.2).
Figure 4 gives an algorithm to seed the initial matrix T. Entries added in Step 2 of the algorithm correspond to entries in the LCFRS parsing chart that can be derived immediately from terminals in the string. Entries added in Step 3 of the algorithm do not depend on the input string or input grammar, but rather initialize elements used in the copy operation described in detail in Section 4.2. Because the algorithm only initializes entries with i < j, the matrix T is guaranteed to be upper triangular, a fact which we will take advantage of in Section 4.2.
4.1.1 Configurations
Our matrix representation requires that a nonterminal appears in more than one equivalent cell in the matrix, and the specific set of cells required depends on the specific patterns in which spans are combined in the LCFRS grammar. We now present a precise description of these cells by defining the configuration of a nonterminal in a rule. The concept of a configuration is designed to represent which endpoints of spans of the rule's right-hand side (r.h.s.) nonterminals B and C meet one another to produce larger spans, and which endpoints, on the other hand, become endpoints of spans of the left-hand side (l.h.s.) nonterminal A.
4.2 Definition of Multiplication Operator
In addition, whenever R = ∅, then for any S, R ⊗ S = S ⊗ R = ∅. This property maintains the upper-triangularity of the transitive closure of T.
Figure 5 gives the algorithm for multiplying two elements of the matrix. The algorithm is composed of two components. The first component (Step 2 in Figure 5) adds nonterminals, for example, A, to cell (i, j), if there is some B and C in (i, k) and (k, j), respectively, such that there exists a rule A → B C and the span endpoints denoted by k are the points where the rule specifies that spans of B and C should meet.
In order to make this first component valid, we have to make sure that k can indeed serve as a concatenation point for (i, j). Step 2 verifies this using the concept of configurations defined earlier. To apply a rule r : A[α] → B[β] C[γ], we must have an entry for (B, i, k) in cell (i, k), where i is a set of indices corresponding to the endpoints of B selected by config2 (r) and k is a set of indices corresponding to the endpoints of B selected by [2φ(B)] \ config2 (r). This condition is enforced by Step 2c of Figure 5. Similarly, we must have an entry for (C, k, j) in cell (k, j), where k is a set of indices corresponding to the endpoints of C selected by config3 (r) and j is a set of indices corresponding to the endpoints of C selected by [2φ(C)] \ config3 (r). This is enforced by Step 2d. Finally, the spans defined by B and C must not overlap in the string. To guarantee that the spans do not overlap, we sort the endpoints of A and check that each position in the sorted list is derived from either B or C as required by the configuration of A in r. This check is performed in Step 2e of Figure 5.
Given that T is initialized to be upper-triangular, the properties of matrix multiplication guarantee that all matrix powers of T are upper-triangular. We now proceed to show that upper-triangular matrices are sufficient in terms of the grammar. In particular, we need to show the following lemma:
For each application of a single-initial rule A → B C, it is possible to create an entry for A by multiplying two upper-triangular matrices T1 and T2, where T1 contains an entry for B, and T2 contains an entry for C.
A nonterminal B appears in a cell above the diagonal if its row address is smaller than its column address, which in turn occurs if the leftmost endpoint of B appears in the row address rather than the column address. The row address for B contains the endpoints of B that are also endpoints of A. Our normal form for LCFRS rules ensures that the leftmost endpoint of B forms the leftmost endpoint of A. Therefore the leftmost endpoint of B is in B's row address, and B is above the diagonal.
The row address of nonterminal C in T2 must contain the endpoints of C that combine with endpoints of B. For single-initial rules, these endpoints include the leftmost endpoint of C, guaranteeing that C appears above the diagonal.
Because each instance of A can be produced by combining elements of T1 and T2 that are above the diagonal, each instance of A can be produced by multiplying two upper-triangular matrices. ∎
4.2.1 Copy Operations
The first component of the algorithm is sound, but not complete. If we were to use just this component in the algorithm, then we would obtain in each cell (i, j) of the transitive closure of T a subset of the possible nonterminals that can span m(i, j). The reason this happens is that our addressing scheme is “over-complete.” This means that any pair of addresses (i, j) and (k, ℓ) are equivalent if m(i, j) = m(k, ℓ).
We need to ensure that the transitive closure, using ⊗, propagates, or copies, nonterminals from one cell to its equivalents. This is done by the second component of the algorithm, in Steps 3–6. The algorithm does this kind of copying by using a set of six special “copy” symbols, {▷₁, ▷₂, ▷₃, ◁₁, ◁₂, ◁₃}. These symbols copy nonterminals from one cell to the other in multiple stages.
Suppose that we need to copy a nonterminal from cell (i, j) to cell (k, ℓ), where m(i, j) = m(k, ℓ), indicating that the two cells describe the same set of indices in the input string. We must move the indices in i ∩ ℓ from the row address to the column address, and we must move the indices in j ∩ k from the column address to the row address. We will move one index at a time, adding nonterminals to intermediate cells along the way.
We now illustrate how our operations move a single index from a row address to a column address (moving from column to row is similar). Let x indicate the index we wish to move, meaning that we wish to copy a nonterminal in cell (i, j) to cell (remove(i, x), insert(j, x)). Because we want our overall parsing algorithm to take advantage of fast matrix multiplication, we accomplish the copy operations through a sequence of three matrix multiplications, as shown in Figure 6. The first multiplication involves the nonterminal A in cell (i, j) in the left matrix, and a ▷₁ symbol in cell (j, insert(j, x̄)) in the right matrix, resulting in a matrix with nonterminal A in cell (i, insert(j, x̄)), where x̄ denotes a marked copy of the index x. This intermediate result is redundant in the sense that the index x appears in the row address and its marked copy x̄ appears in the column address. To remove x from the row address, we multiply on the left with a matrix containing the ▷₂ symbol in cell (remove(i, x), i), resulting in a matrix with nonterminal A in cell (remove(i, x), insert(j, x̄)). Finally, we multiply by a third matrix to replace the marked index x̄ with the unmarked index x. This is done by multiplying on the right with a matrix containing the ▷₃ symbol in cell (insert(j, x̄), insert(j, x)).
The key idea behind this three-step process is to copy elements from one cell to another through intermediate cells. In matrix multiplication, only cells that share a row or a column index actually interact when doing multiplication. Therefore, in order to copy a nonterminal from (i, j) to another cell which represents the same set of spans, we have to copy it through cells such as (i, insert(j, x)) that share the index i with (i, j).
In order to guarantee that our operations copy nonterminals only into cells with equivalent addresses, the seed matrix contains the special symbol ▷₁ only in cells (j, k) such that k = insert(j, x̄) for some x. When ▷₁ in cell (j, k) combines with a nonterminal A in cell (i, j), the result contains A only if x ∈ i, guaranteeing that the index added to the column address was originally present in the row address. In addition, the condition that i contains only unmarked indices (in the multiplication operator) and the condition that j contains only unmarked indices (in the initialization of the seed matrix) guarantee that only one index is marked in the address of any non-empty matrix cell.
Similar conditions apply to the ▷₂ operation. The seed matrix contains ▷₂ only in cells (i, k) such that i = remove(k, x) for some x, guaranteeing that the operation only removes one index at a time. Furthermore, when ▷₂ in cell (i, k) combines with a nonterminal A in cell (k, j), the result contains A only if x̄ ∈ j. This guarantees that the new entry includes all the original indices, meaning that any index we remove from the row address is still present as a marked index in the column address.
The ▷₃ operator removes the mark on index x̄ in the column address, completing the entire copying process. The condition |i ∪ j| = 2φ(A) ensures that the removal of the mark from x̄ does not take place until after x has been removed from the row address.
Taken together, these conditions ensure that after a sequence of one ▷₁, one ▷₂, and one ▷₃, A is copied into all cells having the form (remove(i, x), insert(j, x)) for some x.
To move an index from the column address to the row address, we use one ◁₁ operation followed by one ◁₂ operation and one ◁₃ operation. The conditions on these three special symbols are analogous to the conditions on ▷₁, ▷₂, and ▷₃ outlined earlier, and ensure that we copy from cell (i, j) to cells of the form (insert(i, x), remove(j, x)) for some x.
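The address bookkeeping can be sketched as follows (Python; insert and remove are the operations named above, and the bit 1 marks an index, written x̄ in the text):

```python
def insert(addr, e):
    """Return a new address with element e = (endpoint, bit) added, kept sorted."""
    return tuple(sorted(addr + (e,)))

def remove(addr, e):
    """Return a new address with element e removed."""
    out = list(addr)
    out.remove(e)
    return tuple(out)

# Move endpoint 4 from the row address (1, 4) to the column address (5, 8),
# i.e., copy from cell ((1, 4), (5, 8)) to cell ((1,), (4, 5, 8)):
i = ((1, 0), (4, 0))
j = ((5, 0), (8, 0))
x, xm = (4, 0), (4, 1)                               # x and its marked copy

step1 = (i, insert(j, xm))                           # after the first product
step2 = (remove(i, x), step1[1])                     # after the second product
step3 = (step2[0], insert(remove(step2[1], xm), x))  # after the third product

assert step3 == (((1, 0),), ((4, 0), (5, 0), (8, 0)))
```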
We now show that matrix powers of the upper-triangular seed matrix T copy nonterminals between all equivalent cells above the diagonal.
Let (i, j) and (k, ℓ) be unmarked matrix addresses, in a seed matrix T indexed by row and column addresses from N(d) where d > min{|i|, |j|} and d > min{|k|, |ℓ|}. Assume that min i = min k and either k = remove(i, x) and ℓ = insert(j, x) for some x, or k = insert(i, x) and ℓ = remove(j, x) for some x. If A appears in cell (i, j) of T(n), then A appears in cell (k, ℓ) of T(n+3). Furthermore, the copy operations do not introduce nonterminals into any other cells with unmarked addresses.
The condition on d guarantees that we can form row and column addresses long enough to hold the redundant representations with one address shared between row and column. This condition is only relevant in the case where i, j, k, and ℓ are all of the same length; in this case we need to construct temporary indices with length one greater, as in the example in Figure 6.
A can be added to cell (k, ℓ) through a sequence of three matrix multiplications by combining with symbols ▷₁, ▷₂, and ▷₃, or with ◁₁, ◁₂, and ◁₃. Because T(n) is upper triangular, min i = min i ∪ j, meaning that A's leftmost index is in its row address. The condition min i = min k implies that we are not moving this leftmost index from row to column. The addresses of the three copy symbols required are all formed by adding or removing x or x̄ in the row and column addresses (i, j); because the leftmost index of i is not modified, the copy symbols that are required are all above the diagonal, and are present in the seed matrix T. Therefore, A appears in cell (k, ℓ) of T(n+3).
To see that nonterminals are not introduced into any other cells, observe that ▷₃ and ◁₃ are the only symbols that introduce nonterminals into cells with unmarked addresses. They can only apply when a marked index is present, and when the total number of indices is 2φ(A). This can only occur after either ▷₁ has introduced a marked index and ▷₂ has removed the corresponding unmarked index, or ◁₁ has introduced a marked index and ◁₂ has removed the corresponding unmarked index. ∎
Putting together sequences of these operations to move indices, we arrive at the following lemma:
Let (i, j) and (k, ℓ) be matrix addresses such that m(i, j) = m(k, ℓ), in a seed matrix T indexed by row and column addresses from N(d) where d > min{|i|, |j|} and d > min{|k|, |ℓ|}. Then, for any nonterminal A in cell (i, j) in T(n), A will also appear in cell (k, ℓ) of the power matrix T(n+6d).
Nonterminal A can be copied through a series of intermediate cells by moving one index at a time from i to ℓ, and from j to k. We begin by moving indices from either the row address i to the column address if |i| > |j|, or from the column address j to the row address otherwise. We must move up to d indices from row to column, and d indices from column to row. Each move takes three matrix multiplications, for a total of 6d matrix multiplications. ∎
4.3 Determining the Contact Rank
The dimensions of the matrix T (and its transitive closure) are |N| × |N|. The set N is of size O(nd), where d is a function of the grammar. When a given pair of cells in two matrices of the type of T are multiplied, we are essentially combining endpoints from the first multiplicand column address with endpoints from the second multiplicand row address. As such, we have to ensure that d allows us to generate all possible sequences of endpoints that could potentially combine with a given fixed LCFRS.
The contact rank d of the grammar is the maximum, over rules r ∈ ℛ, of δ(r), the total number of combination points and non-combination points of r. Note that δ(r) depends only on the skeleton of r (see Section 2), and therefore it can be denoted by δ(A → B C).
We require our grammars to be in single-initial form, as described in Section 2. Because the process of converting an LCFRS grammar to single-initial form increases its fan-out by at most one, the contact rank is also increased by at most one.
4.4 Balanced Grammars
A configuration c of nonterminal B is balanced if |c| = φ(B). This means that the number of contact points and the number of non-contact points are the same.
The contact rank d defined in the previous section is the maximum size of any configuration of any nonterminal in any rule. For a given nonterminal B, if φ(B) < d, then we can copy entries between equivalent cells. To see this, suppose that we are moving from cell (i, j) to (k, ℓ) where the length of i is greater than the length of j. As long as we move the first index from row to column, rather than from column to row, the intermediate results will require addresses no longer than the length of i.
We define a balanced grammar to be a grammar containing a nonterminal B such that φ(B) = d, and |config(B)| > 1. As examples, a CFG is not balanced because, although for each nonterminal B, φ(B) = d = 1, the number of configurations |config(B)| is 1. Similarly, TAG is not balanced, because each nonterminal has only one configuration. ITGs are balanced, because, for each nonterminal B, φ(B) = d = 2, and nonterminals can be used in two configurations, corresponding to straight and inverted rules.
The following condition will determine which of two alternative methods we use for the top level of our parsing algorithm.
Unbalanced Grammar Condition: There is no nonterminal B such that φ(B) = d and |config(B)| > 1.
This condition guarantees that we can move nonterminals as necessary with matrix multiplication:
Let (i, j) and (k, ℓ) be matrix addresses such that m(i, j) = m(k, ℓ). Under Condition 4.1, for any nonterminal A in cell (i, j) in T(n), A will also appear in cell (k, ℓ) of the power matrix T(n+6d).
The number of A's endpoints is 2φ(A) = |i| + |j| = |k| + |ℓ|. If the grammar is not balanced, then d > φ(A), and therefore d > min{|i|, |j|} and d > min{|k|, |ℓ|}. By Lemma 3, A will appear in cell (k, ℓ) of the power matrix T(n+6d). ∎
4.5 Computing the Transitive Closure of T
Under Condition 4.1, one can show that given an LCFRS derivation tree t over the input string, each node in t must appear in the transitive closure matrix T+. Specifically, for each node in t representing nonterminal A spanning endpoints {(ℓ1, ℓ2), (ℓ3, ℓ4), … , (ℓ2φ(A)−1, ℓ2φ(A))}, each cell (i, j) in the matrix such that m(i, j) = {(ℓ1, ℓ2), (ℓ3, ℓ4), … , (ℓ2φ(A)−1, ℓ2φ(A))} contains A. This leads to the following result:
Under Condition 4.1, the transitive closure of T is such that [T+]ij represents the set of nonterminals that are derivable for the given spans in m(i, j).
The proof is by induction over the length of the LCFRS derivations. By Lemma 1, derivations consisting of a single rule A[α] → B[β] C[γ] produce A ∈ T(2) for i and j corresponding to the non-combination points of B and C. For all other i and j such that m(i, j) = {(ℓ1, ℓ2), (ℓ3, ℓ4), … , (ℓ2φ(A)−1, ℓ2φ(A))}, an entry is produced in T(6d+2) by Lemma 4. By induction, T(s(6d+2)) contains entries for all LCFRS derivations of depth s, and T+ contains entries for all LCFRS derivations of any length.
In the other direction, we need to show that all entries A in T+ correspond to a valid LCFRS derivation of nonterminal A spanning endpoints m(i, j). This can be shown by induction over the number of matrix multiplications. During each multiplication, entries created in the product matrix correspond either to the application of an LCFRS rule with l.h.s. A, or to the movement of an index between row and column address for a previously recognized instance of A. ∎
The transitive closure still yields a useful result, even when Condition 4.1 does not hold. To show how it is useful, we need to define the “copying” operator, Π, which takes a matrix T′ of the same type as T, and sets Π(T′) using the following procedure:
- 1. Define e(i, j) = {(i′, j′) | m(i′, j′) = m(i, j)}—that is, the set of addresses equivalent to (i, j).
- 2. Set [Π(T′)]ij = ∪(i′, j′) ∈ e(i, j) [T′]i′j′.
This means that Π takes a completion step, and copies all nonterminals between all equivalent addresses in T′. Note that the Π operator can be implemented such that it operates in time O(nd). All it requires is taking O(nd) unions of sets (corresponding to the sets of nonterminals in the matrix cells), where each set is of size O(1) with respect to the sentence length (i.e., the size is only a function of the grammar), and each union is over O(1) sets.
This procedure leads to a recognition algorithm for binary LCFRS that do not satisfy Condition 4.1 (we also assume that these binary LCFRS have no unary cycles or ϵ rules). This algorithm is given later, in Figure 9. It operates by iterating through transitive closure steps and copying steps until convergence. When we take the transitive closure of T, we are essentially computing a subset of the derivable nonterminals. Then, the copying step (with Π) propagates nonterminals through equivalent cells. Now, if we take the transitive closure again, and there is any way to derive new nonterminals because of the copying step, the resulting matrix will have at least one new nonterminal. Otherwise, it will not change, and as such, we have recognized all derivable nonterminals in each cell.
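The outer loop can be sketched as follows (Python; closure stands for the transitive-closure computation of Section 4.5.1, m is the span-merging function of Section 4.1, and all names are illustrative):

```python
from collections import defaultdict

def copy_step(T, m):
    """The Pi operator: union each cell into every equivalent cell,
    grouping unmarked addresses by the set of spans m(i, j) they denote."""
    groups = defaultdict(set)
    for (i, j), cell in T.items():
        spans = m(i, j)
        if spans is not None:
            groups[frozenset(spans)] |= cell
    for (i, j) in T:
        spans = m(i, j)
        if spans is not None:
            T[(i, j)] |= groups[frozenset(spans)]
    return T

def recognize(T, closure, m):
    """Alternate transitive closure and Pi until no cell changes (Figure 9)."""
    while True:
        T = closure(T)
        before = {addr: frozenset(cell) for addr, cell in T.items()}
        T = copy_step(T, m)
        if all(frozenset(cell) == before.get(addr, frozenset())
               for addr, cell in T.items()):
            return T
```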
For any single-initial LCFRS, when Step 2 of the algorithm in Figure 9 converges, T is such that [T]ij represents the set of nonterminals that are derivable for the given spans in m(i, j).
Any LCFRS derivation of a nonterminal can be decomposed into a sequence of rule applications and copy operations, and by induction over the length of the derivation, all derivations will be found. Each matrix operation only produces derivable LCFRS nonterminals, and by induction over the number of steps of the algorithm, only derivable nonterminals will be found. ∎
4.5.1 Reduction of Transitive Closure to Boolean Matrix Multiplication
Valiant (1975) showed that his algorithm for computing the multiplication of two matrices, in terms of a multiplication operator similar to ours, can be reduced to the problem of Boolean matrix multiplication. His transitive closure algorithm requires as a black box this two-matrix multiplication algorithm.
We follow here a similar argument. We can use Valiant's algorithm for the computation of the transitive closure, because our multiplication operator is distributive (with respect to ∪). To complete our argument, we need to show, similarly to Valiant, that the product of two matrices using our multiplication operator can be reduced to Boolean matrix multiplication (Figure 7).
Consider the problem of multiplying two matrices T1 and T2, and say T1 ⊗ T2 = T3. To reduce it to Boolean matrix multiplication, we create pairs of matrices, Gr and Hr, where r ranges over ℛ. The size of Gr and Hr is |N| × |N|. If r = A[α] → B[β]C[γ], we set [Gr]ik to be 1 if the nonterminal B appears in [T1]ik and B, i, and k meet the conditions of Step 2c of Figure 5. Similarly, we set [Hr]kj to be 1 if the nonterminal C appears in [T2]kj and C, k, and j meet the conditions of Step 2d. All other cells, in both Gr and Hr, are set to 0. Note that Gr and Hr for all r ∈ ℛ are upper triangular Boolean matrices.
In addition, we create pairs of matrices, GA and HA, where A ranges over the set of nonterminals 𝓛. We set [GA]ik to be 1 if the nonterminal A appears in [T1]ik, regardless of the conditions of Step 2c of Figure 5. Similarly, we set [HA]kj to be 1 if the nonterminal A appears in [T2]kj, regardless of the conditions of Step 2d. All other cells, in both GA and HA, are set to 0. Again, GA and HA for all A ∈ 𝓛 are upper triangular Boolean matrices.
Finally, we create six additional matrices, one for each element in the set {▷₁, ▷₂, ▷₃, ◁₁, ◁₂, ◁₃}. These matrices indicate the positions in which each symbol appears in the seed matrix T defined in Figure 4:
- 1. G▷₁, for which [G▷₁]ij = 1 only if (▷₁, i, j) ∈ T.
- 2. G▷₂, for which [G▷₂]ij = 1 only if (▷₂, i, j) ∈ T.
- 3. G▷₃, for which [G▷₃]ij = 1 only if (▷₃, i, j) ∈ T.
- 4. G◁₁, for which [G◁₁]ij = 1 only if (◁₁, i, j) ∈ T.
- 5. G◁₂, for which [G◁₂]ij = 1 only if (◁₂, i, j) ∈ T.
- 6. G◁₃, for which [G◁₃]ij = 1 only if (◁₃, i, j) ∈ T.
Now, for each rule r ∈ ℛ, we compute the matrix Ir = GrHr. The total number of such matrix multiplications is |ℛ|, which is constant in n. Now, T3 can be obtained by multiplying these matrices, and applying the conditions of Figure 5 (a sketch of the per-rule product follows the list below):
- 1. For each (i, j), for each rule r = A → B C ∈ ℛ, check whether [Ir]ij = 1. If Step 2e is satisfied for A, i, and j, then add (A, i, j) to [T3]ij.
- 2. For each A ∈ 𝓛, compute JA = GAG▷₁. For each (i, j), add A to [T3]ij if x̄ ∈ j and x ∈ i for some x, and [JA]ij = 1.
- 3. For each A ∈ 𝓛, compute JA = G▷₂HA. For each (i, j), add A to [T3]ij if x̄ ∈ j and x ∉ i for some x, and [JA]ij = 1.
- 4. For each A ∈ 𝓛, compute JA = GAG▷₃. For each (i, j), add A to [T3]ij if |i ∪ j| = 2φ(A), and [JA]ij = 1.
- 5. For each A ∈ 𝓛, compute JA = G◁₁HA. For each (i, j), add A to [T3]ij if x̄ ∈ i and x ∈ j for some x, and [JA]ij = 1.
- 6. For each A ∈ 𝓛, compute JA = GAG◁₂. For each (i, j), add A to [T3]ij if x̄ ∈ i and x ∉ j for some x, and [JA]ij = 1.
- 7. For each A ∈ 𝓛, compute JA = G◁₃HA. For each (i, j), add A to [T3]ij if |i ∪ j| = 2φ(A), and [JA]ij = 1.
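As an illustration of the per-rule reduction, the product Ir = GrHr can be carried out with off-the-shelf Boolean matrix multiplication (a numpy sketch with illustrative names; a fast matrix multiplication routine would be substituted for the final product):

```python
import numpy as np

def rule_product(T1, T2, rule, addresses, cond_B, cond_C):
    """Compute I_r = G_r H_r for one rule r = A -> B C.

    addresses: ordered list of matrix addresses (the set N);
    cond_B, cond_C: the per-cell checks of Steps 2c and 2d of Figure 5.
    [I_r]_{ij} = 1 iff some address k links a B entry of T1 at (i, k)
    with a C entry of T2 at (k, j).
    """
    A, B, C = rule  # A is the l.h.s.; only B and C are inspected here
    size = len(addresses)
    G = np.zeros((size, size), dtype=bool)
    H = np.zeros((size, size), dtype=bool)
    for r, ra in enumerate(addresses):
        for c, ca in enumerate(addresses):
            G[r, c] = B in T1.get((ra, ca), set()) and cond_B(B, ra, ca)
            H[r, c] = C in T2.get((ra, ca), set()) and cond_C(C, ra, ca)
    # Boolean product; a fast matrix multiplication algorithm goes here.
    return (G.astype(np.uint8) @ H.astype(np.uint8)) > 0
```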
The matrix product operation for two matrices of size (2n)d × (2n)d can be computed in time O(nωd), if two m × m Boolean matrices can be multiplied in time O(mω).
The result of the algorithm above is guaranteed to be the same as the result of matrix multiplication using the ⊗ operation of Figure 5 because it considers all combinations of i, j, and k and all pairs of nonterminals and copy symbols, and applies the same set of conditions. This is possible because each of the conditions in Figure 5 applies either to a pair (i, k) or (k, j), in which case we apply the condition to input matrices to the Boolean matrix multiplication, or to the pair (i, j), in which case we apply the condition to the result of the Boolean matrix multiplication. Crucially, no condition in Figure 5 involves i, j, and k simultaneously.
The Boolean matrix algorithm takes time O(nωd) for each matrix multiplication, while the pre- and post-processing steps for each matrix multiplication take only O(n2d). The number of Boolean matrix multiplications depends on the grammar, but is constant with respect to n, yielding an overall runtime of O(nωd). ∎
The final parsing algorithm is given in Figure 8. It works by computing the seed matrix T, and then finding its transitive closure. Finally, it checks whether the start symbol appears in a cell with an address that spans the whole string. If so, the string is in the language of the grammar.
5. Computational Complexity Analysis
As mentioned in the previous section, the algorithm in Figure 8 finds the transitive closure of a matrix under our definition of matrix multiplication. The operations ∪ and ⊗ used in our matrix multiplication distribute. The ⊗ operator takes the cross product of two sets, and applies a filtering condition to the results; the fact that (x ⊗ y) ∪ (x ⊗ z) = x ⊗ (y ∪ z) follows from the fact that it does not matter whether we take the cross product of the union, or the union of the cross products. However, unlike in the case of standard matrix multiplication, our ⊗ operation is not associative. In general, x ⊗ (y ⊗ z) ≠ (x ⊗ y) ⊗ z, because the combination of y and z may be allowed by the LCFRS grammar, whereas the combination of x and y is not.
The transitive closure of a matrix of size (2n)d × (2n)d can be computed in time O(nωd), if 2 < ω < 3, and two m × m Boolean matrices can be multiplied in time O(mω).
We can use the algorithm of Valiant for finding the closure of upper triangular matrices under distributive, non-associative matrix multiplication. Because we can perform one matrix product in time O(nωd) by Lemma 7, the algorithm of Valiant (1975, Theorem 2) can be used to compute transitive closure also in time O(nωd). ∎
When Valiant's paper was published, the best known algorithm for such multiplication was Strassen's algorithm, with M(n) = O(n2.8704). Since then, it has been shown that M(n) = O(nω) for ω < 2.38 (see also Section 1). There are ongoing attempts to further reduce ω, or to find lower bounds for M(n).
The algorithm for transitive closure gives one of the main results of this article:
A single-initial binary LCFRS meeting Condition 4.1 can be parsed in time O(nωd), where d is the contact rank of the grammar, 2 < ω < 3, and two m × m Boolean matrices can be multiplied in time O(mω).
By Lemma 8, Step 2 of the algorithm in Figure 8 takes O(nωd). By Lemma 5, the result of Step 2 gives all nonterminals that are derivable for the given spans in m(i, j). ∎
Parsing a binary LCFRS rule with standard chart parsing techniques requires time O(nφ(A)+φ(B)+φ(C)). Let p = max(A → B C) ∈ ℛ (φ(A) + φ(B) + φ(C)). The worst-case complexity of LCFRS chart parsing techniques is O(np). We can now ask the question: In which cases is the algorithm in Figure 8 asymptotically more efficient than standard chart parsing techniques with respect to n? That is, in which cases is nωd = o(np)?
We turn now to analyze the complexity of the algorithm in Figure 9, giving the main result of this article for arbitrary LCFRS:
A single-initial binary LCFRS can be parsed in time O(nωd+1), where d is the contact rank of the grammar, 2 < ω < 3, and two m × m Boolean matrices can be multiplied in time O(mω).
The algorithm of Figure 9 works by iteratively applying the transitive closure and the copying operator until convergence. At convergence, we have recognized all derivable nonterminals by Lemma 6. Each transitive closure has the asymptotic complexity of O(nωd) by Lemma 8. Each Π application has the asymptotic complexity of O(nd). As such, the total complexity is O(tnωd), where t is the number of iterations required to converge. At each iteration, we discover at least one new nonterminal. The total number of nodes in the derivation for the recognized string is O(n) (assuming no unary cycles or ϵ rules). As such t = O(n), and the total complexity of this algorithm is O(nωd+1). ∎
6. Applications
Our algorithm is a recognition algorithm that is applicable to binary LCFRS. As such, our algorithm can be applied to any LCFRS, by first reducing it to a binary LCFRS. We discuss results for specific classes of LCFRS in this section, and return to the general binarization process in Section 7.6.
LCFRS subsumes context-free grammars, which was the formalism that Valiant (1975) focused on. Valiant showed that the problem of CFG recognition can be reduced to the problem of matrix multiplication, and, as such, the complexity of CFG recognition in that case is O(nω). Our result generalizes Valiant's result. CFGs (in Chomsky Normal Form) can be reduced to a binary LCFRS with f = 1. As such, d = 1 for CFGs, and our algorithm yields a complexity of O(nω). (Note that CFGs satisfy Condition 4.1, and therefore we can use a single transitive closure step.)
LCFRS is a broad family of grammars, and it subsumes many other well-known grammar formalisms, some of which were discovered or developed independently of LCFRS. Two such formalisms are tree-adjoining grammars (Joshi and Schabes 1997) and synchronous context-free grammars. In the next two sections, we explain how our algorithmic result applies to these two formalisms.
6.1 Mildly Context-Sensitive Language Recognition
Linear context-free rewriting systems fall under the realm of mildly context-sensitive grammar formalisms. They subsume four important mildly context-sensitive formalisms that were developed independently and later shown to be weakly equivalent by Vijay-Shanker and Weir (1994): tree-adjoining grammars (Joshi and Schabes 1997), linear indexed grammars (Gazdar 1988), head grammars (Pollard 1984), and combinatory categorial grammars (Steedman 2000). Weak equivalence here refers to the idea that any language generated by a grammar in one of these formalisms can be also generated by some grammar in any of the other formalisms among the four. It can be verified that all of these formalisms are unbalanced, single-initial LCFRSs, and as such, the algorithm in Figure 8 applies to them.
Rajasekaran and Yooseph (1998) show that tree-adjoining grammars can be parsed with an asymptotic complexity of O(M(n2)) = O(n4.76). Although they did not discuss that, the weak equivalence between the four formalisms mentioned here implies that all of them can be parsed in time O(M(n2)). Our algorithm generalizes this result. We now give the details.
Our starting point for this discussion is head grammars. Head grammars are a specific case of linear context-free rewriting systems, not just in the formal languages they define, but also in the way these grammars are described. They are described using concatenation production rules and wrapping production rules, which are directly transferable to LCFRS notation. Their fan-out is 2. We focus in this discussion on “binary head grammars,” defined analogously to binary LCFRS—the rank of all production rules has to be 2. The contact rank of binary head grammars is 2. As such, our work shows that the complexity of recognizing binary head grammar languages is O(M(n2)) = O(n4.76).
Vijay-Shanker and Weir (1994) show that linear indexed grammars (LIGs) can actually be reduced to binary head grammars. Linear indexed grammars are extensions of CFGs, a linguistically motivated restricted version of indexed grammars, the latter of which were developed by Aho (1968) for the goal of handling variable binding in programming languages. The main difference between LIGs and CFGs is that the nonterminals carry a “stack,” with a separate set of stack symbols. Production rules with LIGs copy the stack on the left-hand side to one of the nonterminal stacks in the right-hand side, potentially pushing or popping one symbol in the new copy of the stack. For our discussion, the main important detail about the reduction of LIGs to head grammars is that it preserves the rank of the production rules. As such, our work shows that binary LIGs can also be recognized in time O(n4.76).
Vijay-Shanker and Weir (1994) additionally address the issue of reducing combinatory categorial grammars to LIGs. The combinators they allow are function application and function composition. The key detail here is that their reduction of CCG is to an LIG with rank 2, and, as such, our algorithm applies to CCGs as well, which can be recognized in time O(n4.76).
Finally, Vijay-Shanker and Weir (1994) reduced tree-adjoining grammars to combinatory categorial grammars. The TAGs they tackle are in “normal form,” such that the auxiliary trees are binary (all TAGs can be reduced to normal form TAGs). Such TAGs can be converted to weakly equivalent CCG (but not necessarily strongly equivalent), and as such, our algorithm applies to TAGs as well. As mentioned earlier, this finding supports the finding of Rajasekaran and Yooseph (1998), who show that TAG can be recognized in time O(M(n2)).
For an earlier discussion of connections between TAG parsing and Boolean matrix multiplication, see Satta (1994).
6.2 Synchronous Context-Free Grammars
SCFGs are widely used in machine translation to model the simultaneous derivation of translationally equivalent strings in two natural languages, and are equivalent to the syntax-directed translation schemata of Aho and Ullman (1969). SCFGs are a subclass of LCFRS where each nonterminal has fan-out 2: one span in one language and one span in the other. Because the first span of the l.h.s. nonterminal always contains spans from both r.h.s. nonterminals, SCFGs are always single-initial. Binary SCFGs, also known as ITGs, have no more than two nonterminals on the r.h.s. of a rule, and are the most widely used model in syntax-based statistical machine translation.
Synchronous parsing with traditional tabular methods for ITG is O(n6), as each of the three nonterminals in a rule has fan-out of two. ITGs, unfortunately, do not satisfy Condition 4.1, and therefore we have to use the algorithm in Figure 9. Still, just like with TAG, each rule combines two nonterminals of fan-out 2 using two combination points. Thus, d = 2, and we achieve a bound of O(n2ω+1) for ITG, which is O(n5.76) using the current state of the art for matrix multiplication.
We achieve even greater gains for the case of multi-language synchronous parsing. Generalizing ITG to allow two nonterminals on the right-hand side of a rule in each of k languages, we have an LCFRS with fan-out k. Traditional tabular parsing has an asymptotic complexity of O(n3k), whereas our algorithm has the complexity of O(nωk+1).
Another interesting case of a synchronous formalism for which our algorithm improves the best known result is that of binary synchronous TAGs (Shieber and Schabes 1990)—that is, a TAG in which all auxiliary trees are binary. This formalism can be reduced to a binary LCFRS. A tabular algorithm for such a grammar has the asymptotic complexity of O(n12). With our algorithm, d = 4 for this formalism, and as such its asymptotic complexity in that case is O(n9.52).
7. Discussion and Open Problems
In this section, we discuss some extensions to our algorithm and open problems.
7.1 Turning Recognition into Parsing
The algorithm we presented focuses on recognition: Given a string and a grammar, it can decide whether the string is in the language of the grammar or not. From an application perspective, perhaps a more interesting algorithm is one that returns an actual derivation tree, if it identifies that the string is in the language.
It is not difficult to adapt our algorithm to return such a parse, without changing the asymptotic complexity of O(nωd+1). Once the transitive closure of T is computed, we can backtrack to find such a parse, starting with the start symbol in a cell spanning the whole string. When we are in a specific cell, we check all possible combination points (there are d of those) and nonterminals, and if we find such pairs of combination points and nonterminals that are valid in the chart, then we backtrack to the corresponding cells. The asymptotic complexity of this post-processing step is O(nd+1), which is less than O(nωd) (ω > 2, d > 1).
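A sketch of this backtracking step (Python; chart, grammar, and splits are illustrative stand-ins for the closure matrix, the rule set, and the grammar-driven enumeration of combination points):

```python
def backtrack(chart, grammar, splits, goal, spans):
    """Recover one derivation tree for `goal` over `spans`.

    chart maps (nonterminal, spans) pairs to True for derivable items;
    grammar.rules(A) yields the rules A -> B C; splits(rule, spans)
    enumerates candidate (spans_B, spans_C) decompositions, O(n^d) many.
    """
    for rule in grammar.rules(goal):
        for spans_B, spans_C in splits(rule, spans):
            if chart.get((rule.B, spans_B)) and chart.get((rule.C, spans_C)):
                return (goal,
                        backtrack(chart, grammar, splits, rule.B, spans_B),
                        backtrack(chart, grammar, splits, rule.C, spans_C))
    return (goal,)  # no binary rule applies: item derived from terminals
```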
This post-processing step corresponds to an algorithm that finds a parse tree, given a pre-calculated chart. If the chart was not already available when our algorithm finishes, the asymptotic complexity of this step would correspond to the asymptotic complexity of a naïve tabular parsing algorithm. It remains an open problem to adapt our algorithm to probabilistic parsing, for example—finding the highest scoring parse given a probabilistic or a weighted LCFRS (Kallmeyer and Maier 2010). See more details in Section 7.3.
7.2 General Recognition for Synchronous Parsing
Similarly to LCFRS, the rank of an SCFG is the maximal number of nonterminals that appear in the right-hand side of a rule. Any SCFG can be binarized into an LCFRS grammar. However, when the SCFG rank is arbitrary, the fan-out of the resulting LCFRS grammar can be larger than 2. This happens because binarization creates intermediate nonterminals that span several substrings, denoting binarization steps of the rule. These substrings are eventually combined into two spans, to yield the language of the SCFG grammar (Huang et al. 2009).

Our algorithm does not always improve the asymptotic complexity of SCFG parsing over tabular methods. For example, Figure 10 shows the combination of spans for the rule [S → A B C D, B D A C], along with a binarization into three simpler LCFRS rules. A naïve tabular algorithm for this rule would have the asymptotic complexity of O(n10), but the binarization shown in Figure 10 reduces this to O(n8). Our algorithm gives a complexity of O(n9.52), as the second step in the binarization shown consists of a rule with d = 4.
7.3 Generalization to Weighted Logic Programs
Weighted logic programs (WLPs) are declarative programs, in the form of Horn clauses similar to those that Prolog uses, that can be used to formulate parsing algorithms such as CKY and other types of dynamic programming algorithms or NLP inference algorithms (Eisner, Goldlust, and Smith 2005; Cohen, Simmons, and Smith 2011).
For a given Horn clause, WLPs also require a “join” operation that sums (in some semiring) over a set of possible values in the free variables in the Horn clauses. With CKY, for example, this sum will be performed on the mid-point concatenating two spans. This join operation is also the type of operation we address in this paper (for LCFRS) in order to improve their asymptotic complexity.
It remains an open question to see whether we can generalize our algorithm to arbitrary weighted logic programs. In order to create an algorithm that takes as input a weighted logic program (and a set of axioms) and “recognizes” whether the goal is achievable, we would need to have a generic way of specifying the set N, which was specialized to LCFRS in this case. Not only that, we would have to specify N in such a way that the asymptotic complexity of the WLP would improve over a simple dynamic programming algorithm (or a memoization technique).
In addition, in this paper we focus on the problem of recognition and parsing for unweighted grammars. Benedí and Sánchez (2007) showed how to generalize Valiant's algorithm in order to compute inside probabilities for a PCFG and a string. Even if we were able to generalize our addressing scheme to WLPs, it remains an open question to see whether we can go beyond recognition (or unweighted parsing).
7.4 Rytter's Algorithm
Rytter (1995) gives an algorithm for CFG parsing with the same time complexity as Valiant's, but a somewhat simpler divide-and-conquer strategy. Rytter's algorithm works by first recursively finding all chart items entirely within the first half of the string and entirely within the second half of the string. The combination step uses a shortest path computation to identify the sequence of chart items along a spine of the final parse tree, where the spine extends from the root of the tree to the terminal in position n/2. Rytter's algorithm relies on the fact that this spine, consisting of chart items that cross the midpoint of the string, forms a single path from the root to one leaf of the derivation tree. This property does not hold for general LCFRS, because two siblings in the derivation tree may both correspond to multiple spans in the string, each containing material on both sides of the string midpoint. For this reason, Rytter's algorithm does not appear to generalize easily to LCFRS.
7.5 Relation to Multiple Context-Free Grammars
Nakanishi et al. (1998) develop a matrix multiplication parsing algorithm for multiple context-free grammars (MCFGs). When these grammars are given in a binary form, they can be reduced to binary LCFRS. Similarly, binary LCFRS can be reduced to binary MCFGs. The algorithm that Nakanishi et al. develop is simpler than ours, and does not directly tackle the problem of transitive closure for LCFRS. More specifically, Nakanishi et al. multiply a seed matrix such as our T by itself in several steps, and then follow up with a copying operation between equivalent cells. They repeat this n times, where n is the sentence length. As such, the asymptotic complexity of their algorithm is identical for both balanced and unbalanced grammars, a distinction they do not make.
The complexity analysis of Nakanishi et al. is different from ours but, in certain cases, yields identical results. For example, if φ(A) = f for all nonterminals A and the grammar is balanced, then both our algorithm and their algorithm give a complexity of O(nωf+1). If the grammar is unbalanced, then our algorithm gives a complexity of O(nωf), whereas the asymptotic complexity of their algorithm remains O(nωf+1). As such, Nakanishi et al.'s algorithm does not generalize Valiant's algorithm: its asymptotic complexity for context-free grammars is O(nω+1) and not O(nω).
Nakanishi et al. pose an open problem in their paper, which can loosely be reworded as finding an algorithm that computes the transitive closure of T without the extra O(n) factor that their algorithm incurs. In our paper, we provide a solution to this open problem for the case of single-initial, unbalanced grammars. The core of the solution lies in the matrix multiplication copying mechanism described in Section 4.2.
7.6 Optimal Binarization Strategies
The two main grammar parameters that affect the asymptotic complexity of parsing with LCFRS (in its general form) are the fan-out of the nonterminals and the rank of the rules. With tabular parsing, we can actually refer to the parsing complexity of a specific rule in the grammar. Its complexity is O(np), where the parsing complexity p is the total fan-out of all nonterminals in the rule. For binary rules of the form A → B C, p = φ(A) + φ(B) + φ(C). For example, a TAG-style binary rule in which all three nonterminals have fan-out 2 has p = 6, matching the familiar O(n6) complexity of tabular TAG parsing.
To optimize the time complexity of tabular parsing with a binary LCFRS that is equivalent to another non-binary LCFRS, we want to minimize the time it takes to parse each rule. As such, our goal is to minimize φ(A) + φ(B) + φ(C) in the resulting binary grammar. Gildea (2011) has shown that this metric corresponds to the treewidth of a dependency graph constructed from the grammar. It is not known whether finding the optimal binarization of an LCFRS is an NP-complete problem, but Gildea shows that a polynomial-time algorithm would imply improved approximation algorithms for the treewidth of general graphs.
For a fixed binarized grammar, d is always less than p, the tabular parsing complexity, and, hence, the optimal d* over binarizations of an LCFRS is always less than the optimal p* for tabular parsing. However, whether any savings can be achieved with our algorithm depends on whether ωd* < p*, or ωd* + 1 < p* in the case of balanced grammars. For TAG, for example, d* = 2 and p* = 6, so ωd* ≈ 4.76 < 6 and our algorithm yields a saving. Our criterion does not seem to correspond closely to a well-studied graph-theoretic concept such as treewidth, and it remains an open problem to find an efficient algorithm that minimizes this definition of parsing complexity.
It is worth noting that d ≥ p/3. As such, this gives a lower bound on the time complexity of our algorithm relative to tabular parsing using the same binarized grammar: if O(nt1) is the asymptotic complexity of our algorithm, and O(nt2) is the asymptotic complexity of a tabular algorithm, then t1 ≥ ωt2/3.
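The bound can be seen as follows (our own sketch of the argument): the p endpoint indices involved in a rule are distributed among the row address, the column address, and the contracted address of the corresponding matrix product, and each address holds at most d indices, so

\[
p = \varphi(A) + \varphi(B) + \varphi(C) \le 3d
\quad\Longrightarrow\quad
t_1 = \omega d \;\ge\; \frac{\omega}{3}\, t_2 \;\ge\; \frac{2}{3}\, t_2 ,
\]

where the last inequality uses the fact that ω ≥ 2.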
8. Conclusion
We described a parsing algorithm for binary linear context-free rewriting systems with asymptotic complexity O(nωd+1), where ω < 2.38, d is the “contact rank” of the grammar (the maximal number of combination and non-combination points in the rules of the grammar in single-initial form), and n is the string length. For the subset of binary LCFRS that are unbalanced, our algorithm has asymptotic complexity O(nωd). Our result generalizes the algorithm of Valiant (1975), and also reinforces existing results about mildly context-sensitive parsing for tree-adjoining grammars (Rajasekaran and Yooseph 1998). Our result also implies that inversion transduction grammars can be parsed in time O(n2ω+1) and that synchronous parsing with k languages has asymptotic complexity O(nωk+1), where k is the number of languages.
Appendix A. Notation
Table A.1 lists the notation used in this article.
Symbol | Description | 1st mention
---|---|---
M(n) | The complexity of Boolean n × n matrix multiplication | §1
ω | Best known exponent for M(n), i.e., M(n) = O(nω) | §1
[n] | Set of integers {1, …, n} | §2
[n]0 | [n] ∪ {0} | §2
 | Nonterminals of the LCFRS | §2
 | Terminal symbols of the LCFRS | §2
 | Variables that denote spans in the grammar | §2
 | Rules of the LCFRS | §2
A, B, C | Nonterminals | §2
f | Maximal fan-out of the LCFRS | Eq. (3)
φ(A) | Fan-out of nonterminal A | §2
y | A variable (potentially subscripted) | §2
T | Seed matrix | §3
N, N(d) | Set of indices for addresses in the matrix | Eq. (4)
i, j | Indices for cells in T; i, j ∈ N | §4.1
d | Grammar contact rank | §4.1
M | Set of which each cell Tij is a subset | §4.1
 | Copying/marking symbols for rows | §4.1
 | Copying/marking symbols for columns | §4.1
n | Length of the sentence to be parsed | §1
< | Total order on the set of indices of T | §4.1
m(i, j) | Merged sorted sequence of i ∪ j, divided into pairs | §4.1
remove(v, x) | Removal of x from sequence v | Figure 5
insert(v, x) | Insertion of x into sequence v | §4.5
Π | Copying operator | §4.5
Acknowledgments
The authors thank the anonymous reviewers for their comments and Adam Lopez and Giorgio Satta for useful conversations. This work was supported by NSF grant IIS-1446996 and by EPSRC grant EP/L02411X/1.
Notes
Without placing a bound on f, the problem of recognition of LCFRS languages is NP-hard (Satta 1992).
These symbols are used for “copying commands”: (1) “from row”; (2) “from column”; (3) “to row”; (4) “to column”; (5) “unmark row”; (6) “unmark column.”
To see that Equation (6) is true, consider that if we take the φ(B) + φ(C) spans of the right-hand side and combine them into the φ(A) spans of the left-hand side, we get φ(B) + φ(C) − φ(A) points where spans “touch.” If φ(A) = 1, this is clearly true: concatenating φ(B) + φ(C) spans into a single span creates φ(B) + φ(C) − 1 contact points. For φ(A) > 1, consider that for each additional span of the left-hand side, we “lose” one contact point.
For two sets of real numbers, X and Y, it holds that if for all x ∈ X there is a y ∈ Y such that x < y, then max X < max Y.
General indexed grammars copy the stack to multiple nonterminals on the right-hand side.
References
Author notes
School of Informatics, University of Edinburgh, Edinburgh, EH8 9AB, United Kingdom. E-mail: [email protected].
Department of Computer Science, University of Rochester, Rochester, NY 14627, United States. E-mail: [email protected].