Abstract
This paper explores lexicographic semirings and their application to problems in speech and language processing. Specifically, we present two instantiations of binary lexicographic semirings: one involving a pair of tropical weights, and one pairing a tropical weight with a novel string semiring we term the categorial semiring. The first of these is used to yield an exact encoding of backoff models with epsilon transitions. This lexicographic language model semiring allows for off-line optimization of exact models represented as large weighted finite-state transducers, in contrast to implicit (on-line) failure-transition representations. We present empirical results demonstrating that, even in simple intersection scenarios amenable to the use of failure transitions, the use of the more powerful lexicographic semiring is competitive in terms of intersection time. The second of these lexicographic semirings is applied to the problem of extracting, from a lattice of word sequences tagged for part of speech, only the single best-scoring part-of-speech tagging for each word sequence. We do this by incorporating the tags as a categorial weight in the second component of a 〈Tropical, Categorial〉 lexicographic semiring, determinizing the resulting word lattice acceptor in that semiring, and then mapping the tags back as output labels of the word lattice transducer. We compare our approach to a competing method due to Povey et al. (2012).
1. Introduction
Applications of finite-state methods to problems in speech and language processing have grown significantly over the last decade and a half. From their beginnings in the 1950s and 1960s, when they were used to implement small hand-built grammars (e.g., Joshi and Hopely 1996), through their applications in computational morphology in the 1980s (Koskenniemi 1983), finite-state models have come to be routinely applied in areas ranging from parsing (Abney 1996) to machine translation (Bangalore and Riccardi 2001; de Gispert et al. 2010), text normalization (Sproat 1996), and various areas of speech recognition, including pronunciation modeling and language modeling (Mohri, Pereira, and Riley 2002).
The development of weighted finite-state approaches (Mohri, Pereira, and Riley 2002; Mohri 2009) has made it possible to implement models that can rank alternative analyses. A number of weight classes (semirings) can be defined (Kuich and Salomaa 1986; Golan 1999), though in practice nearly all applications use the tropical semiring, whose most obvious instantiation is as a way of combining negative log probabilities of words in a hypothesis in speech recognition systems. With few exceptions (e.g., Eisner 2001), there has been relatively little work exploring applications of different semirings, in particular structured semirings consisting of tuples of weights.
In this article we explore the use of what we term lexicographic semirings, which are tuples of weights where the comparison between a pair of tuples starts by comparing the first element of the tuple, then the second, and so forth until unequal values are found—just as lexicographic order is determined between words. We investigate two such lexicographic semirings, one based on pairs of tropical weights, and the other that uses a tropical weight paired with a novel string weight that we call the categorial semiring. The latter is based loosely on the operations of categorial grammar.
We use the first semiring to provide an exact encoding of language models as weighted finite-state transducers using epsilon arcs in place of failure arcs. The second we apply to the problem of selecting only the single-best tagging for each word sequence in a tagged lattice. In each case we formally justify the application and demonstrate the correctness and efficiency on real domains.
1.1 Definitions
Adopting the notation often used in the speech and language literature (Mohri 2009), a semiring is a 5-tuple (K, ⊕, ⊗, 0̄, 1̄), where K is a nonempty set on which two binary operations are defined, namely, the semiring plus ⊕ and the semiring times ⊗, such that:

1. (K, ⊕) is a commutative monoid with identity element 0̄;
2. (K, ⊗) is a monoid with identity element 1̄;
3. ⊗ distributes over ⊕; and
4. 0̄ is an annihilator for ⊗: for all a in K, a ⊗ 0̄ = 0̄ ⊗ a = 0̄.
A weighted finite-state transducer T over a semiring is an 8-tuple (Σ, Δ, Q, I, F, E, λ, ρ), where Σ and Δ are the finite input and output alphabets, respectively; Q is a finite set of states, of which I and F are the initial and final subsets of states, respectively; E ⊆ Q × (Σ ∪ {ε}) × (Δ ∪ {ε}) × K × Q is a finite set of transitions between pairs of states, each with an input label, an output label, and a semiring weight, where ε denotes the empty element of an alphabet; and λ and ρ are semiring weights associated with initial and final states, respectively. A weighted finite-state acceptor can be regarded as a special case in which either the input or the output alphabet is an empty set.
A weighted finite-state automaton or transducer is deterministic or subsequential if no two transitions leaving the same state have the same input label. A generic determinization algorithm can transform a weighted finite-state acceptor or transducer into its deterministic form, if such a form exists. For details on the algorithm and conditions for determinization, see Section 6.2 of Mohri (2009). The condition most relevant for our purposes is that the algorithm works with any weakly divisible semiring. Briefly, a semiring is said to be divisible if all non-0̄ elements admit an inverse, that is, if (K − {0̄}, ⊗) is a group. A semiring is weakly divisible if for any x and y in K such that x ⊕ y ≠ 0̄, there exists at least one z such that (x ⊕ y) ⊗ z = x. The ⊗ is cancellative if z is unique, in which case it can be written as z = (x ⊕ y)⁻¹x. The non-unique case is not relevant here.
1.2 Lexicographic Semirings
The notion of weight can be extended to complex tuples of weights, and to semirings over those tuples. Of interest to us here is a tuple-based semiring, the lexicographic semiring: given component semirings W1, …, Wn, each of which must have the path property (i.e., w1 ⊕ w2 ∈ {w1, w2}), ⊗ applies componentwise and ⊕ selects whichever tuple is smaller in the lexicographic order induced by the components' natural orders.
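To make the operations concrete, the following minimal Python sketch implements a 〈T, T〉-style lexicographic weight over two tropical components. The class and method names are illustrative assumptions of this sketch and do not correspond to the OpenFst LexicographicWeight API.

```python
INF = float("inf")

class TTLexWeight:
    """A minimal sketch of a <T, T>-lexicographic weight: a pair of
    tropical (min, +) components compared lexicographically."""

    ZERO = (INF, INF)   # semiring 0: the weight of a non-existent path
    ONE = (0.0, 0.0)    # semiring 1: the identity for times

    @staticmethod
    def plus(w1, w2):
        # Compare first components, then second: the lexicographic
        # minimum. Python's tuple comparison is already lexicographic.
        return min(w1, w2)

    @staticmethod
    def times(w1, w2):
        # Componentwise tropical times, i.e., addition of costs.
        return (w1[0] + w2[0], w1[1] + w2[1])

    @staticmethod
    def divide(w1, w2):
        # Componentwise tropical division (subtraction); this is the
        # weak divisibility used by weight pushing in determinization.
        return (w1[0] - w2[0], w1[1] - w2[1])
```

Because ⊕ always returns one of its operands, the pair semiring has the path property whenever its components do, which is exactly the condition noted above.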
1.3 An Example Application of Lexicographic Semiring: Implementing Ranking in Optimality Theory
As an example of a lexicographic semiring that has a tuple length (usually) greater than 2, consider one way in which one might implement constraint ranking in Optimality Theory.
Optimality Theory (Prince and Smolensky 2004) is a popular approach in phonology and other areas of linguistics. Its basic tenet is that linguistic patterns are explained by a rank-ordered set of violable constraints. Candidate forms are generated by a function Gen, and the winning form is the one whose constraint violations are least serious under the ranking. Each constraint may be violated multiple times, but a single violation of a higher-ranked constraint trumps any number of violations of a lower-ranked constraint.
Consider the following recent example from http://en.wikipedia.org/wiki/Optimality_theory#Example: It accounts for the form of the regular noun plural suffix in English, which is voiceless /s/ after a voiceless stop (cats), /əz/ after a sibilant (dishes), and /z/ otherwise. Quoting directly from the Wikipedia example, the following constraints in the order given account for the phenomena:
1. *SS - Sibilant-sibilant clusters are ungrammatical: one violation for every pair of adjacent sibilants in the output.
2. Agree(Voi) - Agree in specification of [voi]: one violation for every pair of adjacent obstruents in the output that disagree in voicing.
3. Max - Maximize all input segments in the output: one violation for each segment in the input that does not appear in the output. (This constraint prevents deletion.)
4. Dep - Output segments are dependent on having an input correspondent: one violation for each segment in the output that does not appear in the input. (This constraint prevents insertion.)
5. Ident(Voi) - Maintain the identity of the [voi] specification: one violation for each segment that differs in voicing between the input and output.
Consider the example of dishes. From a presumed underlying form of dish+z, Gen generates a range of possible forms, including those in the lefthand column in the following table:

  dish + z    *SS   Agree(Voi)   Max   Dep   Ident(Voi)
  dishz       *!    *
  dishs       *!                                 *
  dish                           *!
  dishis                                *        *!
  ☞ dishiz                              *
Asterisks indicate violations, and exclamation marks indicate the critical violation that rules out the particular form. Both dishs and dishz violate *SS, and because none of the other forms violates *SS, and *SS is highest ranked, those two violations are critical. Concomitantly, any other violations (e.g., the violation of Ident(Voi) by dishs) are irrelevant for determining the fate of those forms. Moving down the constraint hierarchy, dish violates Max, because the suffix does not appear in this form; again this violation is critical, because the remaining two forms do not violate the constraint. Both dishis and dishiz violate Dep because there is an inserted segment, and they are thus equally bad according to that constraint. So to decide between the two forms, we go to the next lower constraint, Ident(Voi), which dishis violates because the underlying z is changed to an s. This violation is therefore critical, and the winning form is dishiz, indicated by the right-pointing hand.
There have been many finite-state models of Optimality Theory (Ellison 1994; Albro 1998; Eisner 1998; Frank and Satta 1998; Karttunen 1998; Eisner 2000), and our point here is not to provide a fully worked out implementation of the model. Rather, we wish to show that an appropriately defined lexicographic semiring can readily model the constraint ranking.
We start by defining the violation semiring V as (ℕ ∪ {∞}, min, +, ∞, 0); V is clearly just a special case of the tropical semiring in which the values of the weights are restricted to non-negative integers. We then define the optimality semiring O as 〈V, V, …, V〉, namely, a lexicographic tuple over V. The number of elements of the tuple is the same as the number of constraints needed in the system being described. If there are five rank-ordered constraints, as above, then O is a 5-tuple over V.
Assuming that the Gen function generates a lattice S of possible surface forms for a word, and a set of n constraints, we need a set of constraint acceptors C1 …Cn, each of which matches individual violations of the constraints, and where each violation of Ci is weighted as 〈0, 0, … , 0, 1, 0, … , 0〉, with 1 in the ith position in the weight. So in the given example, *SS would be a finite-state acceptor that allows sibilant-sibilant sequences, but only at a cost 〈1, 0, 0, 0, 0〉 per sequence. Assuming that when Gen deletes an element (as in the form dish), it marks the deletion (e.g., dish*), then we can implement Max as an acceptor that accepts the deletion symbol with cost 〈0, 0, 1, 0, 0〉 per instance. In a similar vein, assuming that any inserted elements are marked (e.g., dish>iz), then Dep will allow the insertion marker with cost 〈0, 0, 0, 1, 0〉 per instance. Finally, Ident(Voi) assumes that a change in voicing is marked somehow (e.g., dishis<), and this marker will be accepted with cost 〈0, 0, 0, 0, 1〉 per instance.
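As an illustration of how such weights would select a winner, the following Python fragment encodes the violation vectors from the tableau above and picks the optimal candidate by lexicographic comparison. The vectors are transcribed from the example; the code is a sketch of the weight semantics rather than a full finite-state implementation.

```python
# Each candidate's weight in the optimality semiring O is its vector of
# constraint violations, ordered <*SS, Agree(Voi), Max, Dep, Ident(Voi)>.
candidates = {
    "dishz":  (1, 1, 0, 0, 0),   # *SS, Agree(Voi)
    "dishs":  (1, 0, 0, 0, 1),   # *SS, Ident(Voi)
    "dish":   (0, 0, 1, 0, 0),   # Max
    "dishis": (0, 0, 0, 1, 1),   # Dep, Ident(Voi)
    "dishiz": (0, 0, 0, 1, 0),   # Dep
}

# The semiring plus is the lexicographic minimum, which Python's tuple
# ordering provides directly; summing all candidates yields the winner.
winner = min(candidates, key=candidates.get)
print(winner)   # -> dishiz
```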
2. Paired Tropical Lexicographic Semiring and Applications
We start in this section with a simple application of a paired tropical-tropical lexicographic semiring to the problem of representing failure (φ) transitions in an n-gram language model. Although φ-transitions can be represented exactly, as we shall argue in the following, there are limitations on their use, limitations that can be overcome by representing them instead as ε arcs and lexicographic weights.
2.1 Lexicographic Language Model Semiring
Failure transitions are an implicit method for representing a much larger explicit automaton—in the case of n-gram models, all possible n-grams for that order. During composition with the model, the failure transition must be interpreted on the fly, keeping track of those symbols that have already been found leaving the original state, and only allowing failure transition traversal for symbols that have not been found (the semantics of “otherwise”). This compact implicit representation cannot generally be preserved when composing with other models, for example, when combining a language model with a pronunciation lexicon as in widely used FST approaches to speech recognition (Mohri, Pereira, and Riley 2002). Moving from implicit to explicit representation when performing such a composition leads to an explosion in the size of the resulting transducer, frequently making the approach intractable. In practice, an off-line approximation to the model is made, typically by treating the failure transitions as epsilon transitions (Mohri, Pereira, and Riley 2002; Allauzen, Mohri, and Roark 2003), allowing large transducers to be composed and optimized off-line. These complex approximate transducers are then used during first-pass decoding, and the resulting pruned search graphs (e.g., word lattices) can be rescored with exact language models encoded with failure transitions. Failure transitions can be used to exactly encode a wide range of language models, including class-based language models (Allauzen, Mohri, and Roark 2003) or discriminatively trained n-gram language models (Roark, Saraclar, and Collins 2007)—allowing for full lattice rescoring rather than n-best list extraction.
Similar problems arise when building, say, POS taggers as WFSTs: Not every POS tag sequence will have been observed during training, hence failure transitions will achieve great savings in the size of models. Yet discriminative models may include complex features that combine both input stream (word) and output stream (tag) sequences in a single feature, yielding complicated transducer topologies for which effective use of failure transitions may not be possible. An exact encoding using other mechanisms is required in such cases to allow for off-line representation and optimization.
2.1.1 Standard Encoding
For language model encoding, we will differentiate between two classes of transitions: backoff arcs (labeled with φ for failure, or with ε using our new semiring) and n-gram arcs (everything else, labeled with the word whose probability is assigned). Each state in the automaton represents an n-gram history string h, and each n-gram arc is weighted with the (negative log) conditional probability of the word w labeling the arc given the history h. We assume that, for every n-gram hw explicitly represented in the language model, every proper prefix and every proper suffix of that n-gram is also represented in the model. Hence, if h is a state in the model, then h′ (the suffix of h of length |h| − 1) will also be a state in the model. For a given history h and n-gram arc labeled with a word w, the destination of the arc is the state associated with the longest suffix of the string hw that is a history in the model. This will depend on the Markov order of the n-gram model. For example, consider the trigram model schematic shown in Figure 1, in which only history sequences of length 2 are kept in the model. Thus, from history h_i = w_{i−2}w_{i−1}, the word w_i transitions to h_{i+1} = w_{i−1}w_i, which is the longest suffix of h_i w_i in the model.
As detailed in the "otherwise" semantics of Equation (3), backoff arcs transition from a state h to a state h′, typically the suffix of h of length |h| − 1, with weight −log α_h. We call the destination state a backoff state. This recursive backoff topology terminates at the unigram state (i.e., h = ε, no history).
A backoff state of order k may be entered either via a φ-arc from a state of order k + 1 or via an n-gram arc from a state of order k − 1. This means that no n-gram arc can enter the zeroth-order state (the final backoff state), and that full-order states (history strings of length n − 1 for a model of order n) may have n-gram arcs entering from other full-order states as well as from backoff states with histories of length n − 2.
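The destination-state convention described above can be summarized in a few lines of Python. Here `histories` is assumed to be the set of history strings (as word tuples) retained by the model; this representation is a simplification for illustration, not the actual OpenGrm encoding.

```python
def next_state(h, w, histories):
    """From history h (a tuple of words), word w leads to the longest
    suffix of hw that is itself a history in the model."""
    hw = h + (w,)
    for i in range(len(hw)):        # longest suffix first
        suffix = hw[i:]
        if suffix in histories:
            return suffix
    return ()                       # the unigram (empty) history
```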
2.1.2 Exact Encoding of a Backoff Model with Lexicographic Language Model Semiring
For an LM machine M on the tropical semiring with failure transitions, we can simulate the φ-arcs in a standard LM topology by a topologically equivalent machine M′ on the 〈T, T〉-lexicographic semiring, where φ has been replaced with ε, as follows. Let si and s′i be equivalent states in M and M′, respectively. For every n-gram arc with label w and weight c, source state si, and destination state sj, construct an n-gram arc with label w, weight 〈0, c〉, source state s′i, and destination state s′j. The exit cost of each state is constructed as follows: if the state is non-final, the cost is 〈∞, ∞〉; otherwise, if it is final with exit cost c, it is 〈0, c〉.
The pseudocode for converting a failure-encoded language model into the lexicographic language model semiring is given in Figure 2, and the conversion is illustrated in Figure 3.
Let n be the length of the longest history string in the model. For every φ-arc with (backoff) weight c, source state si, and destination state sj representing a history of length k, construct an ε-arc with source state s′i, destination state s′j, and weight 〈Φ^{⊗(n−k)}, c〉, where Φ > 0 and Φ^{⊗(n−k)} takes Φ to the (n − k)th power with the ⊗ operation. In the tropical semiring, ⊗ is +, so Φ^{⊗(n−k)} = (n − k)Φ. For example, in a trigram model, if we are backing off from a bigram state h (history length 1) to a unigram state, n − k = 2 − 0 = 2, so we set the backoff weight to 〈2Φ, −log α_h〉 for some Φ > 0. In the special case where the φ-arc has weight ∞, which can happen in some language model topologies, the corresponding ε-arc weight is 〈∞, ∞〉.
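The following Python sketch mirrors the construction just described (the actual pseudocode is in Figure 2, not reproduced here). It operates over a deliberately simplified arc-list representation rather than the OpenFst API, so all names and data structures are illustrative assumptions.

```python
PHI, EPS, INF = "<phi>", "<eps>", float("inf")
BIG_PHI = 1.0   # any Phi > 0 works

def convert_to_tt(arcs, order, n, final_cost):
    """arcs: [(src, label, cost, dst)]; order[s]: history length of
    state s; n: longest history length; final_cost[s]: exit cost or INF.
    Returns arcs and final costs in the <T, T>-lexicographic semiring."""
    new_arcs = []
    for src, label, cost, dst in arcs:
        if label == PHI:
            # Backoff arc becomes an epsilon arc; its first dimension
            # penalizes backing off, more cheaply at higher orders.
            k = order[dst]
            if cost == INF:
                new_arcs.append((src, EPS, (INF, INF), dst))
            else:
                new_arcs.append((src, EPS, ((n - k) * BIG_PHI, cost), dst))
        else:
            # n-gram arc keeps its cost in the second dimension.
            new_arcs.append((src, label, (0.0, cost), dst))
    new_final = {s: ((0.0, c) if c != INF else (INF, INF))
                 for s, c in final_cost.items()}
    return new_arcs, new_final
```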
2.2 Proof of Equivalence
We wish to prove that for any machine N, ShortestPath(M′ ∩ N′) passes through the equivalent states in M′ to those passed through in M for ShortestPath(M ∩ N), where N′ is N with each weight c mapped to 〈0, c〉 so that the intersection is defined. Therefore, determinization of the resulting intersection after ε-removal yields the same topology as intersection with the equivalent φ machine. Intuitively, because the first dimension of the weights is 0 for n-gram arcs and greater than 0 for backoff arcs, the shortest path will traverse the fewest possible backoff arcs; further, because higher-order backoff arcs cost less in the first dimension of the weights in M′, the shortest path will include n-gram arcs at their earliest possible point.
We prove this by induction on the state-sequence of the path p/p′ up to a given state si/s′i in the respective machines M/M′.
Base case: If p/p′ is of length 0, and therefore the states si/s′i are the initial states of the respective machines, the proposition clearly holds.
Inductive step: Now suppose that p/p′ visits s0 … si/s′0 … s′i, so that we have reached si/s′i in the respective machines. Suppose the accumulated weights of p/p′ are W and 〈Ψ, W〉, respectively. We wish to show that whichever state sj is next visited on p (i.e., the path becomes s0 … si sj), the equivalent state s′j is visited on p′ (i.e., the path becomes s′0 … s′i s′j).
Let w be the next symbol to be matched leaving states si and s′i. There are four cases to consider:
1. There is an n-gram arc leaving states si and s′i labeled with w, but no backoff arc leaving the state.
2. There is no n-gram arc labeled with w leaving the states, but there is a backoff arc.
3. There is no n-gram arc labeled with w and no backoff arc leaving the states.
4. There is both an n-gram arc labeled with w and a backoff arc leaving the states.
In the first three cases, the same arcs (or absence of arcs) are available in M and M′, with first-dimension weight 0 on any matching n-gram arc, so the same transition is taken in both machines; the fourth case is the interesting one. In order to emit w by taking the backoff arc out of state s′i, one or more backoff (ε) transitions must be taken, followed by an n-gram arc labeled with w. Let k be the order of the history represented by state s′i; hence the cost of the first backoff arc is 〈(n − k + 1)Φ, −log α_h〉 in our semiring. If we traverse m backoff arcs prior to emitting the w, the first dimension of our accumulated cost will be (m(n − k) + m(m + 1)/2)Φ, based on our algorithm for the construction of M′ given in Section 2.1.2. Let s′l be the destination state after traversing m backoff arcs followed by an n-gram arc labeled with w. Note that, by definition, m ≤ k, and k − m + 1 is the order of state s′l. Based on the construction algorithm, the state s′l is also reachable by first emitting w from state s′i to reach state s′j, followed by some number of backoff transitions, as can be seen from the paths between states w_{i−1} and w_i in the trigram model schematic in Figure 1. The order of state s′j is either k (if k is the highest order in the model) or k + 1 (by extending the history of state s′i by one word). If it is of order k, then it will require m − 1 backoff arcs to reach state s′l, one fewer than the path to s′l that begins with a backoff arc, for a total first-dimension cost of ((m − 1)(n − k) + m(m − 1)/2)Φ, which is less than (m(n − k) + m(m + 1)/2)Φ. If state s′j is of order k + 1, there will be m backoff arcs to reach state s′l, but with a total first-dimension cost of (m(n − k) + m(m − 1)/2)Φ, which is also less. Hence the state s′l can always be reached from s′i with a lower cost through state s′j than by first taking the backoff arc from s′i. Therefore the shortest path on M′ must follow s′0 … s′i s′j.
This completes the proof.
2.3 Experimental Comparison of ε, φ, and 〈T, T〉 Encoded Language Models
For our experiments we used lattices derived from a very large vocabulary continuous speech recognition system, which was built for the 2007 GALE Arabic speech recognition task, and used in the work reported in Lehr and Shafran (2011). The lexicographic semiring was evaluated on the development set (2.6 hours of broadcast news and conversations; 18K words). The 888 word lattices for the development set were generated using a competitive baseline system with acoustic models trained on about 1,000 hours of Arabic broadcast data and a 4-gram language model. The language model consisting of 122M n-grams was estimated by interpolating 14 components. The vocabulary is relatively large at 737K, and the associated dictionary has only single pronunciations.
The language model was converted to the automaton topology described earlier, using OpenFst (Allauzen et al. 2007), and represented in three ways: (1) as an approximation of a failure machine using epsilons instead of failure arcs; (2) as a correct failure machine; and (3) using the lexicographic construction derived in this article. Note that all of these options are available for representing language models in the OpenGrm library (Roark et al. 2012).
The three versions of the LM were evaluated by intersecting them with the 888 lattices of the development set. The overall error rate for the systems was 24.8%, comparable to the state of the art on this task.² For the shortest paths, the failure and lexicographic machines always produced identical lattices (as determined by FST equivalence); in contrast, 78.6% of the shortest paths from the epsilon approximation differ, at least in terms of weights, from the shortest paths using the failure LM. For full lattices, 6.1% of the lexicographic outputs differ from the failure LM outputs, due to small floating-point rounding issues; 98.9% of the epsilon approximation outputs differ.³
In terms of size, the failure LM, with 5.7 million arcs, requires 97 MB. The equivalent 〈T, T〉-lexicographic LM requires 120 MB, due to the doubling of the size of the weights.⁴ To measure speed, we performed the intersections 1,000 times for each of our 888 lattices on a 2993 MHz Intel Xeon CPU, and took the mean times for each of our methods. The 888 lattices were processed in a mean total time of 1.62 seconds (1.8 msec per lattice) using the failure LM; using the 〈T, T〉-lexicographic LM required 1.8 seconds (2.0 msec per lattice), and is thus about 11% slower. The epsilon approximation, where the failure arcs are approximated with epsilon arcs, took 1.17 seconds (1.3 msec per lattice). The slightly slower speeds for the two exact methods are due to the overhead of (1) computing the failure function at runtime for the failure LM, and (2) determinization for the 〈T, T〉 representation. After intersection (and determinization, if required), there is no size difference in the lattices resulting from any of the three methods.
In this section we have shown that the failure-arc representation of backoff in a finite-state language model topology can be exactly represented using ε-arcs and weights in the 〈T, T〉-lexicographic semiring.
We turn in the next section to another application of lexicographic semirings, this time involving a novel string semiring as one of the components.
3. Tagging Determinization on Lattices
In many applications of speech and language processing, we generate intermediate results in the form of a lattice to which we apply finite-state operations. For example, we might POS tag the words in an ASR output lattice as an intermediate stage for detecting out-of-vocabulary nouns. This involves composing the lattices with a POS tagger and will result in a weighted transducer that maps from input words to tags.
Suppose we want from that transducer all the recognized word sequences, but for each word sequence just the single best tagging. One obvious way to do this would be to extract sublattices containing all possible taggings of each word sequence, compute the shortest path of each such sublattice, and take the union of the results. There are various ways this might be accomplished algorithmically, but in general it will be an expensive operation.
With a little thought it will be clear that, at an appropriate level of abstraction, the problem we have just described involves determinization. That is, the desired result is deterministic in the sense that for any input, there is a unique path through the lattice. But one cannot simply apply transducer determinization because, for one thing, a given input may have multiple outputs, so the transducer is non-functional and not even p-subsequential (Mohri 2009).
In this section we describe two methods, both of which make use of novel weight classes consisting of a pair of a tropical weight and a string weight, which allow a solution involving determinization of an acceptor in that semiring. One, due to Povey et al. (2012), is described in Section 3.1. Our own work, previously reported in Shafran et al. (2011), is presented in Sections 3.2 and 3.3. In Section 3.4 we compare the efficiency of the two approaches.
3.1 Povey et al.'s Approach
Povey et al. (2012) pair each tropical arc weight with a string weight recording the output-label sequence, and define a specialized determinization over such weights. For efficiency reasons, determinization and epsilon removal (which is optimized for this particular type of weight) are done simultaneously in their method. For the string part, they use a data structure involving a hash table, which enables string concatenation in linear time.
3.2 Categorial Semiring
An alternative approach to that of Povey et al. eschews a special definition of determinization, using instead the standard definition already provided in the OpenFst library. To this end, we designed a lexicographic weight pair that has a tropical weight as its first dimension and a novel form of string weight as its second dimension, to represent the tags. Note that the standard string weight (e.g., as implemented in the OpenFst library) will not do. In that semiring, w1 ⊗ w2 is defined as concatenation, and w1 ⊕ w2 is defined as the longest common prefix of w1 and w2, which is not in general equal to either w1 or w2. Thus the string weight class does not have the path property, and hence it cannot be used as an element of a lexicographic semiring tuple.
We can solve that problem by defining w1 ⊕ w2 as the lexicographic minimum (according to some definition of string ordering) of w1 and w2, which guarantees that the semiring has the path property. But now we need a way to make the semiring weakly divisible, so that when weights are pushed during the determinization operation, the "loser" can be preserved. For a string weight, this can be achieved by recording the division so that a subsequent ⊗ operation with the appropriate (inverse) string is cancellative. Thus if x ⊕ y = y, then there should be a z = y\x such that (x ⊕ y) ⊗ z = (x ⊕ y) ⊗ y\x = y ⊗ y\x = x.
A natural model for this is categorial grammar (Lambek 1958). In categorial grammar, there is a set of primitive categories, such as N, V, and NP, as well as a set of complex types constructed out of left (\) or right (/) division operators. An expression X\Y denotes a category that, when combined with an X on its left, makes a Y. For example, a verb phrase in English could be represented as an NP\S, because when combined with an NP on its left, it makes an S. Similarly, a determiner is NP/N, because it combines with an N on its right to make an NP.
In our categorial semiring, each weight records both its current value and the history of the operations that produced it, and it is the history that defines the natural ordering of the semiring. Consider the potentially problematic case for distributivity: it is still the case that c ⊗ (c\a ⊕ b) = c ⊗ b = c_b. This is because for c\a ⊕ b, b precedes c\a in the string ordering, so that c\a ⊕ b = b. For (c ⊗ c\a) ⊕ (c ⊗ b), however, we now get the same result: (c ⊗ b) has both a history and a value of c_b, whereas (c ⊗ c\a) has a value of a as before, but a history of c_c\a. The sum of these weights is determined by the lexicographic comparison of the histories c_b and c_c\a, and thus (c ⊗ c\a) ⊕ (c ⊗ b) = c_b.
Note that one difference between the categorial semiring and standard categorial grammar is that the categorial semiring division may involve complex categorial weights that are themselves concatenated, as we have already seen. For example, one may need to left-divide a category NN by a complex category that itself involves a division and a multiplication. We might thus produce a category such as 〈VB\JJ_NN〉\NN. We assume division has precedence over multiplication (concatenation), so in order to represent this complex category, the disambiguating brackets 〈〉 are needed. The interpretation of this category is something that, when combined with the category VB\JJ_NN on the left, makes an NN.
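A minimal Python sketch of the categorial weight, sufficient to reproduce the c ⊗ (c\a ⊕ b) example above, is given below. It handles only simplex cancellation and omits the disambiguating brackets; the full implementation in the OpenFst contributed code is more general.

```python
class CategorialWeight:
    """A string weight with a value and an operation history; the
    history, not the value, determines the natural ordering."""
    def __init__(self, value, history=None):
        self.value = value
        self.history = history if history is not None else value

    def times(self, other):
        # Concatenation, with left-division cancellation: c (x) c\a = a.
        hist = self.history + "_" + other.history
        head, sep, tail = other.value.partition("\\")
        if sep and head == self.value:
            return CategorialWeight(tail, hist)
        return CategorialWeight(self.value + "_" + other.value, hist)

    def plus(self, other):
        # Lexicographic minimum, compared on histories (see text).
        return self if self.history <= other.history else other

    def divide(self, other):
        # Left residual y\x, such that y (x) (y\x) = x.
        return CategorialWeight(other.value + "\\" + self.value,
                                other.history + "\\" + self.history)

b, c = CategorialWeight("b"), CategorialWeight("c")
c_div_a = CategorialWeight("c\\a")            # the residual c\a
lhs = c.times(c_div_a.plus(b))                # c (x) (c\a (+) b)
rhs = c.times(c_div_a).plus(c.times(b))       # (c (x) c\a) (+) (c (x) b)
assert lhs.value == rhs.value == "c_b"
```

Comparing on histories rather than values is what restores distributivity after cancellation, as the assertion at the end confirms.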
3.3 Implementation of Tagging Determinization Using a Lexicographic Semiring
Having chosen the semirings for the first and second weights in the transformed weighted finite-state automaton, we now need to define a joint semiring over both weights and specify its operations. For this we return to the lexicographic semiring. Specifically, we define the 〈T, C〉-lexicographic semiring over a tuple of tropical and left-categorial weights, inheriting their corresponding identity elements. The 0̄ and 1̄ elements for the categorial component are defined in the same way as in the standard string semiring, namely, as the infinite string and the empty string ε, respectively, as discussed previously.
A Sketch of a Proof of Correctness: The correctness of this lexicographic semiring, combined with determinization, for our problem could be shown by tracing the results of each operation in a generic determinization algorithm, as in Mohri (2009). Instead, we provide here an intuition using the example in Figure 4. The two input strings fine me and fine mead share the prefix fine. In the first case, fine is a verb (VB), whereas in the second it is an adjective (JJ). When two outgoing arcs have the same input symbols, the determinization algorithm chooses the arc with the lowest weight, 〈1, JJ〉. For potential future use, the other weight 〈2, VB〉 is divided by the lowest weight 〈1, JJ〉 and the result 〈1, JJ\VB〉 is saved. (Note that the divide operation for the tropical semiring is arithmetic subtraction.) When processing the next set of arcs, the determinization algorithm will encounter two paths for the input fine mead. The accumulated weight on the path through nodes 0-2-3 is straightforward: 〈1, JJ〉 ⊗ 〈6, NN〉 = 〈7, JJ_NN〉. The accumulated weight computed by the determinization algorithm through 0-1-3 consists of three components: the lowest weight for fine, the saved residual, and the arc weight for mead from 1-3. Thus, the accumulated weight for 0-1-3 for fine mead is 〈1, JJ〉 ⊗ 〈1, JJ\VB〉 ⊗ 〈7, NN〉 = 〈9, VB_NN〉. From the two possible paths that terminate at node 3 with input string fine mead, the determinization algorithm will pick the one with the lowest accumulated weight, 〈7, JJ_NN〉 ⊕ 〈9, VB_NN〉 = 〈7, JJ_NN〉, the expected result. Similarly, for the input fine me the determinization algorithm will pick the weight 〈5, VB_PRP〉. Thus, the determinization algorithm produces the desired result for both input strings in Figure 4, and this can be shown to hold in general.
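The arithmetic in this example can be checked with a few lines of Python, representing each 〈Tropical, Categorial〉 weight as a (cost, tag) pair. This is a toy verification of the worked example, not the library implementation; in particular, ties are broken here by plain tuple comparison rather than the full categorial ordering.

```python
def times(w1, w2):
    # (x) adds costs; tags concatenate, with JJ (x) JJ\VB cancelling to VB.
    head, sep, tail = w2[1].partition("\\")
    tag = tail if sep and head == w1[1] else w1[1] + "_" + w2[1]
    return (w1[0] + w2[0], tag)

def divide(w1, w2):
    # Residual z with w2 (x) z = w1: subtract costs, left-divide tags.
    return (w1[0] - w2[0], w2[1] + "\\" + w1[1])

residual = divide((2.0, "VB"), (1.0, "JJ"))        # (1.0, "JJ\\VB")
path_023 = times((1.0, "JJ"), (6.0, "NN"))         # (7.0, "JJ_NN")
path_013 = times(times((1.0, "JJ"), residual), (7.0, "NN"))
assert path_013 == (9.0, "VB_NN")
assert min(path_023, path_013) == (7.0, "JJ_NN")   # (+) picks the winner
```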
After determinization, the output symbols (tags) on the second weight may accumulate in certain paths, as in the earlier example. These weights need to be mapped back to associated input symbols (words). This mapping and the complete procedure for computing the single-best transduction paths for all unique input sequences for a given WFST (word lattice) using the lexicographic semiring is described in the next few sections. Note that our categorial semiring allows for synchronizing the resulting output labels with their associated input labels, which the Povey et al. (2012) approach in general does not.
3.3.1 Lattice Representation
Consider a lattice transducer where input labels are words (e.g., generated by a speech recognizer), output labels are tags (e.g., generated by a part-of-speech tagger), and weights in the tropical semiring represent negative log probabilities. For example, the toy lattice in Figure 5 has four paths, with two possible tag sequences for the string Time flies like an arrow. In general, for any given word sequence, there may be many paths in the lattice with that word sequence, with different costs corresponding to different ways of deriving that word sequence from the acoustic input, as well as different possible ways of tagging the input.
The procedure for removing all but the single best-scoring path for each input word sequence is as follows. We convert the weighted transducer to an equivalent acceptor in the 〈T, C〉-lexicographic semiring, as in the algorithm in Figure 6. This acceptor is then determinized in the 〈T, C〉-lexicographic semiring, to yield a lattice in which each distinct sequence of input labels (words) corresponds to a single path. The result of converting the lattice in Figure 5 to the 〈T, C〉 semiring, followed by determinization and conversion back to the tropical semiring, is shown in Figure 7. Note now that there are three paths, as desired, and that the tags on several of the paths are complex categorial tags.
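In the spirit of the conversion algorithm (Figure 6, not reproduced here), the following sketch shows the weight encoding over a simplified arc-list representation; the tuple layout is an assumption of this sketch rather than the OpenFst data structures.

```python
def to_tc_acceptor(arcs):
    """Move each arc's output tag into the second (categorial) weight
    component: (src, word, tag, cost, dst) -> (src, word, (cost, tag), dst)."""
    return [(src, word, (cost, tag), dst)
            for (src, word, tag, cost, dst) in arcs]

def to_transducer(arcs):
    """Inverse mapping, applicable once every categorial weight is again
    a simplex tag: restore the tag to the output tape."""
    return [(src, word, tag, cost, dst)
            for (src, word, (cost, tag), dst) in arcs]
```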
We now have an acceptor in the 〈T, C〉-lexicographic semiring with, in general, complex categorial weights in the second component of the weight pair. It is now necessary to simplify these categorial weight sequences down to sequences of simplex categories, and to reconstruct a transducer that maps words to tags with tropical weights. Figure 8 presents the result of such a simplification.
There are two approaches to this, outlined in the next two sections. The first involves pushing 〈T, C〉-lexicographic weights back from the final states, splitting states as needed, and then reconstructing the now simple categorial weights as output labels on the lattice. The latter reconstruction is essentially the inverse of the algorithm in Figure 6. The second approach involves creating a transducer in the tropical semiring with the input labels as words and the output labels as complex tags. For this approach we need to construct a mapper transducer which, when composed with the lattice, will reconstruct the appropriate sequences of simplex tags.
3.3.2 State Splitting and Weight Pushing
In the first approach we push weights back from the final states, thus requiring a reverse topological traversal of states in the lattice. The categorial weights of each arc are split into a prefix and a suffix, according to the SplitWeight function of Figure 9. The prefixes will be pushed towards the initial state, but if there are multiple prefixes associated with arcs leaving the state, then the state will need to be split: For k distinct prefixes, k distinct states are required. The PushSplit algorithm in Figure 10 first accumulates the set of distinct prefixes at each state (lines 5–13), as well as storing the vector of arcs leaving the state, which will be subsequently modified. For each prefix, a new state is created (lines 22–25), although the first prefix in the set simply uses the state itself. Note that any categorial weight associated with the final cost yields the first prefix, meaning that it would be assigned the already existing state; hence all newly created states can be non-final. Each state is thus associated with a distinct single prefix, and each must be reachable from the same set of previous states as the original state. Thus, for each new state, any arc that already has the original state as its destination state must be copied, and the new arc assigned the new destination state and weight, depending on the prefix associated with the new state (lines 26–30). The prefix associated with the original state must then be pushed onto the appropriate arcs (line 29). Finally, because all the prefix values have been pushed, each arc from the original state must be updated so that only the suffix value remains in the weight, now leaving the state associated with the original weight's prefix (lines 31–34).
3.3.3 Mapper Approach
In the second approach, we build a mapper FST (M) that converts sequences of complex tags back to sequences of simple tags. The algorithm for constructing this mapper is given in Figure 11, and an illustration can be found in Figure 12. In essence, sequences of observed complex tags are interpreted and the resulting simplex tags are assigned to the output tape of the transducer. Simplex tags in the lattice are mapped to themselves in the mapper FST (line 6 of the function BuildMapper in Figure 11); complex tags require longer paths, the construction of which is detailed in the MakePath function. The complex labels are parsed, and required input and output labels are placed on LIFO queues (lines 3–7). Then a path is created from state 0 in the mapper FST that eventually returns to state 0, labeled with the appropriate input and output sequences (lines 9–15).
Once the mapper FST has been constructed, the determinized transducer is composed with the mapper—L′ ∘ M—to yield the desired result, after projecting onto output labels. Note, crucially, that the mapper will in general change the topology of the determinized acceptor, splitting states as needed. This can be seen by comparing Figures 7 and 8. Indeed, the mapping approach and PushSplit are completely equivalent, and, as we shall see, have similar time efficiency.
The construction in the case of the PushSplit algorithm is more direct because it operates on the determinized lattice before it is converted back to the tropical semiring, after which the simplex categories are reconstructed onto the output labels to yield a transducer identical to that in Figure 8.
3.4 Experimental Comparisons Between Povey et al.'s and 〈T, C〉-Lexicographic Semirings
3.4.1 POS-Tagging Problem
Our solutions were empirically evaluated on 4,664 lattices from the NIST English CTS RT Dev04 test set. The lattices were generated using a state-of-the-art speech recognizer, similar to Soltau et al. (2005), trained on about 2,000 hours of data, which performed at a word error rate of about 24%. The utterances were decoded in three stages using speaker independent models, vocal-tract length normalized models, and speaker-adapted models. The three sets of models were similar in complexity with 8,000 clustered pentaphone states and 150K Gaussians with diagonal covariances.
The lattices from the recognizer were tagged using a weighted finite state tagger. The tagger was trained on the Switchboard portion of the Penn Treebank (Marcus, Santorini, and Marcinkiewicz 1993). Treebank tokenization is different from the recognizer tokenization in some instances, such as for contractions (“don't” becomes “do n't”) or possessives (“aaron's” becomes “aaron 's”). Further, many of the words in the recognizer vocabulary of 93k words are unobserved in tagger training, and are mapped to an OOV token “unk”. Words in the treebank not in the recognizer vocabulary are also mapped to “unk”, thus providing probability mass for that token in the tagger. A tokenization transducer was created to map from recognizer vocabulary to tagger vocabulary.
Two POS-tagging models were trained: a first-order and a third-order hidden Markov model (HMM), estimated and encoded as tagging transducers. In the first-order HMM, the transition probability is conditioned on the previous word's tag, whereas in the third-order model it is conditioned on the previous three words' tags. The transition probabilities are smoothed using Witten-Bell smoothing, and backoff smoothing is achieved using failure transitions. For each word in the tagger input vocabulary, only the POS tags observed with that word are allowed; that is, the emission probability is not smoothed and is zero for unobserved tag/word pairs. A given word lattice is first composed with the tokenization transducer, and then with the POS tagger, to produce a transducer with the original lattice word strings on the input side and tag strings on the output side.
These models were validated on a 2,000-sentence held-aside subset of the Switchboard treebank. The first-order model achieved 91.4% tagging accuracy, and the third-order model 93.8% accuracy, which is competitive for this particular task: Eidelman, Huang, and Harper (2010) reported accuracy of 92.4% for an HMM tagger on this task (though for a different validation set). Both models likely suffer from using a single “unk” category, which is relatively coarse and does not capture informative suffix and prefix features that are common in such models for tagging OOVs. For the purposes of this article, these models serve to demonstrate the utility of the new lexicographic semiring using realistic models. A similar WFST topology can be used for discriminatively trained models using richer feature sets, which would potentially achieve higher accuracy on the task.
The tagged lattices, obtained from composing the ASR lattice with the POS tagger, were then converted to the 〈T, C〉-lexicographic semiring, determinized in this semiring, and then converted back using the mapper transducer, as discussed in Section 3.3.3. Note that the computational cost of this conversion is proportional to the number of arcs in the lattice and hence is significantly lower than the overhead incurred in the conventional approach of extracting all unique paths in the lattice and converting the paths back to a lattice after tagging.
The results of this operation were compared with the method of taking the 1,000 best paths through the original lattice, and removing any path where the path's word sequence had been seen in a lower-cost path. This generally resulted in a rank-ordered set of paths with n < 1,000 members.
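For concreteness, the baseline just described can be sketched as follows, assuming hypotheses arrive as (cost, words, tags) triples with the word sequence given as a hashable tuple; this is an illustration of the comparison method, not code from the experiments.

```python
def best_tagging_per_word_sequence(nbest):
    """From a rank-ordered n-best list, keep only the first (lowest-cost)
    tagging seen for each distinct word sequence."""
    seen, kept = set(), []
    for cost, words, tags in sorted(nbest, key=lambda h: h[0]):
        if words not in seen:
            seen.add(words)
            kept.append((cost, words, tags))
    return kept
```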
In all cases the n-best paths produced by the method proposed in this article were identical to the n-best paths produced by the method just described. The only differences were due to minor floating-point number differences (expected due to weight-pushing in determinization), and cases where equivalent weighted paths were output in different orders.
3.4.2 Results
Despite large overall commonalities between Povey et al.'s approach (henceforth Povey) and the 〈T, C〉 lexicographic approach (henceforth TC), there are some interesting differences between the two. One difference is that the highly structured categorial weights used in TC are more complex than the string weight used in Povey. Another important difference is the synchronization issue. In TC, the original input symbols are synchronized with the determinized output symbols, whereas in Povey they are not. TC uses the semantics of categorial grammar to keep the history of the operations performed while determinizing a lattice, whereas Povey lacks such semantics. Although POS tagging is a task that by definition has one tag per input token, many other tasks of interest (e.g., finding the most likely pronunciation or state sequence) have a variable number of output labels per token, making synchronization in the absence of such semantics more difficult. Hence, these differences may affect the time and space complexity, feasibility, and ease of use of the approaches in various tasks.
In this section, we compare the efficiency of the two approaches in the same situations on the same data. We ran the experiments detailed in Section 3.4.1 in three conditions: Povey in the Kaldi toolkit (Povey et al. 2011) (with specialized determinization), and both Povey and TC in the OpenFst library (with general determinization). This allows us to tease apart the impact of differences that are due to the specialized determinization versus differences in the weight definitions. There is nothing in principle to prevent the simultaneous epsilon removal from being implemented in OpenFst for use with general determinization in the 〈T, C〉 lexicographic semiring, although this is not the focus of this article.
We compare these conditions in terms of running time, memory usage, and required disk space. Tables 1(a) and 1(b) show efficiency results of determinizing lattices tagged using the first-order HMM tagger, and Tables 2(a), and 2(b) show those results for the third-order HMM tagger.
From Table 1(a) we see that Povey is faster and demands less memory compared with TC. However, results using Povey with general determinization show that the memory demands between the two approaches are similar in the absence of the specialized determinization. We also see that the average number of intermediate tags produced during determinization in Povey is larger, whereas the average length of intermediate tags is smaller, than those in TC. This is due to the fact that the categorial semiring keeps a complete history of operations by appending complex tags. We do not perform any special string compression on these tags, which may yield performance improvements (particularly with the larger POS-tagging model, as demonstrated in Table 2).
We compared the approach of using the mapper with that of the PushSplit algorithm in TC. The outputs were equivalent in both cases and the time and space complexities were comparable. The PushSplit algorithm was slightly more efficient than the mapper approach, although the difference is not significant.
While the intermediate space and processing time are larger for TC, we see from Table 1(b) that the output lattices resulting from TC are smaller than those from Povey in terms of the number of states, transitions, input/output epsilons, and required disk space. Because the lattices produced by Povey are not synchronized, they contain many input/output epsilons, and therefore an increased number of states and transitions. In contrast, the lattices output by TC are synchronized and minimal. The size differences between the two approaches are even larger when both use general determinization.
As Tables 2(a) and 2(b) show, time and space efficiencies in tagging using the third-order HMM tagger follow the same pattern as those using the first-order HMM tagger, although the differences are more pronounced. We report these results on a subset of 4,000 of the 4,664 test lattices, chosen based on input lattice size so as to avoid cases of very high intermediate memory usage in general determinization. This high intermediate memory usage does argue for the specialized determinization, and was the rationale for that algorithm in Povey et al. (2012). The non-optimized string representation within the categorial semiring makes this even more of an issue for TC than for Povey. Again, though, the resulting lattices are much more compact when using the lexicographic 〈T, C〉 semiring. We leave investigation of an optimized string representation, such as storing the history only when it differs from the value, using the hash-table data structure, or memory caching, to future work.
4. Combining the Semirings
In this article, we have described two lexicographic semirings, each consisting of a weight pair. Suppose one wished to combine these two in a system that tags a lattice, and then selects the single best tagging for each word sequence. An obvious way to do this would be to implement a two-stage process. Apply the n-gram Markov model of the tagger with the backoff strategy implemented using the paired tropical semiring in Section 2 with tags as acceptor labels. Then, convert the resulting transducer into the lexicographic 〈T, C〉 semiring with words as acceptor labels and determinize to obtain the correct results.
Because the lexicographic semiring is extensible, one might also think of combining the two semirings into a single 〈T, T, C〉 lexicographic triple where, for example, the first dimension is the failure arc cost, the second dimension holds the tag cost (n-gram transition costs of tags and the cost of observing the word given the tag), and the third dimension holds the tags represented in the categorial semiring. One might then compose the tagging model with the lattice, and then determinize in one step in the triple semiring.
Although this works in the sense that it is technically possible to construct this semiring and determinize in it, it yields the wrong results. The reason for this is that the lexicographic semirings for the two tasks (the tagging task and the subsequent determinization of the tagged lattice) involve determinization with respect to different labels. In the first task, the backoff models are defined with respect to the Markov chain, or n-grams, of the tags, and the labels on the resulting acceptor are tags. In the second task, the determinization needs to be performed with respect to the word labels to obtain unique tags for all word sequences. A cross product of the two types of labels would not accomplish the task either, because the determinization would then produce unique paths for all word-and-tag combinations, and not the best tag sequence for each word sequence. There is no obvious or easy way to determinize with respect to both sets of labels simultaneously.
We can illustrate this problem with an example, which is also useful for understanding clearly how each of the semirings functions. The simple example involves a cost-free word lattice consisting of two paths aa and ba, in a scenario where word a can take two possible tags A or B. We will assign variables to model costs, so that we can illustrate the range of scenarios where the use of the triple semiring will yield an incorrect answer, and why. Let c(a:A) be the cost of the tag A with word a, which in our HMM POS tagger is −log P(a | A). Let g(x,y) be the cost in the grammar (tag sequence model) of transitioning from state x to state y in the model. See Figure 13 for our example L, T, L ∘ T, and G. All costs in the example are in the 〈T, T, C〉 semiring for ease of explication; the first dimension of the cost is zero except for backoff arcs in G.
In Figure 14 we show the result of L ∘ T ∘ G both after simple composition and after epsilon removal and conversion from a transducer in the 〈T, T〉 semiring to an acceptor in the 〈T, T, C〉 semiring. In the second and third WFSTs, we highlight the paths that have zero cost in the first dimension of that semiring, which are the only paths that can result from determinization (whatever the model costs). These paths only include tag B for the initial instance of symbol a. However, if g(0,2) + c(a:A) + g(2,3) + c(a:A) + g(3,3) < c(a:B) + g(0,1) + c(a:A) + g(1,3), then the tag sequence a:A a:A would have lower (second-dimension) cost than a:B a:A, despite having taken a backoff arc. Because using a backoff arc is the only way to produce the tag sequence AA, that path should then be the result. In order to get the correct result, one must first determinize with x:Y labels as a unit (using fstencode) in the 〈T, T〉 semiring, then project into the 〈T, C〉 semiring and determinize again.
5. Conclusions
In this article, we have introduced two applications of lexicographic semirings to speech and language processing problems. The first application used the lexicographic semiring to provide an exact encoding of failure arcs in an n-gram language model using an epsilon representation for the failure arc. This lexicographic language model semiring allows much more flexibility in combining the language model with other linguistic models without danger of prohibitive blow-up in the size of the resulting transducers: for example, precomposing the language model with a lexicon and a context model in a CLG model of speech recognition (Mohri, Pereira, and Riley 2002).
The second application used a lexicographic semiring for the problem of determinizing a tagged word lattice so that each word sequence is paired with its single best tag sequence. This was accomplished by encoding the tags as the second, categorial dimension of the 〈T, C〉 semiring, then determinizing the resulting acceptor. Finally, we map the second-dimension categorial weights back as output labels. This latter stage generally requires that we push complex categorial weights back to reconstruct a sequence of simplex categories, an operation that can be performed in two distinct and equally efficient ways. As part of this work we developed a novel string semiring, the categorial semiring, which we have described in detail for the first time here.
For both of these applications, the lexicographic semiring solution was shown to be competitive in terms of efficiency with alternative approaches.
In the future, one can imagine various extensions of the core ideas presented here to further applications. For example, one might use an Optimality Theory–inspired model with ranked constraints implemented using a lexicographic semiring as part of a pronunciation modeling system that ranks pronunciations according to the degree to which they violate various constraints of the language. The 〈T, C〉-lexicographic semiring introduced in Section 3 can be generalized to compute the single-best transduction path in multi-tape weighted transducers. For instance, by encoding the arc likelihoods, the phone sequence, the clustered allophone sequence, the acoustic state sequence, and the acoustic segmental duration associated with a word sequence as a lexicographic tuple and determinizing the resulting automaton, we can extract, for each word sequence, the tags corresponding to its single best path. Thus, our method is much more flexible and powerful than algorithms developed specifically for determinizing POS-tagged word lattices, as in Roche and Schabes (1995), or approximations specific to applications, as in Shugrina (2010).
All of the software described in this article is publicly available. The lexicographic semiring is distributed as part of the core OpenFst distribution at http://www.openfst.org. The categorial semiring is available in the contributed section at http://www.openfst.org/twiki/bin/view/Contrib/FstContrib. The categorial rescoring methods including both the mapping and push-split approaches are available from http://www.opengrm.org.
Acknowledgments
Some results in this paper were reported in conference papers: Roark, Sproat, and Shafran (2011) and Shafran et al. (2011). This research was supported in part by NSF grants IIS-0811745, IIS-0905095, and IIS-0964102, and DARPA grant HR0011-09-1-0041. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the NSF or DARPA. We also thank Cyril Allauzen for useful discussion.
Notes
1. For partial orderings, where multiple constraints are at the same level in the absolute dominance hierarchy, just one dimension would be required for all constraints at the same level.
2. The error rate is a couple of points higher than in Lehr and Shafran (2011) because we discarded non-lexical words, which are absent in the maximum-likelihood-estimated language model and are typically added to the unigram backoff state with an arbitrary cost, fine-tuned to optimize performance for a given task.
3. The very slight differences in these percentages (less than 3% absolute in all cases) versus those originally reported in Roark, Sproat, and Shafran (2011) are due to small changes in the conversion from ARPA-format language models to OpenFst encoding in the OpenGrm library (Roark et al. 2012), related to ensuring that, for every n-gram explicitly included in the model, every proper prefix and proper suffix is also included in the model, something that the ARPA format does not require.
4. If size became an issue, the first dimension of the 〈T, T〉 weight could be represented by a single byte.
Author notes
Google Inc., 76 Ninth Ave, 4th Floor, New York, NY 10011, USA. E-mail: {rws,roark}@google.com.
Center for Spoken Language Understanding, Oregon Health & Science University, 3181 SW Sam Jackson Park Rd, GH40, Portland, OR 97239-3098, USA. Emails: {mahsa.yarmohamadi,zakshafran}@gmail.com.