## Abstract

Tesar (2014) develops the notion of *output-drivenness*, provides guarantees that Optimality Theory grammars satisfy it, and demonstrates its learnability implications. This article discusses the extension of Tesar’s theory to a representational framework with partial phonological features. It considers a hierarchy of notions of output-drivenness of increasing strength that can be defined within this extended framework. It determines the strongest notion of output-drivenness that holds in the case of partial features. And it shows that the learnability implications discussed by Tesar carry over to a framework with partial features only if feature undefinedness is effectively treated by identity faithfulness constraints as an additional feature value.

## 1 Introduction

Tesar’s (2014) notion of *output-drivenness* formalizes the intuition that any discrepancy between an underlying and a surface (or *output*) form is *driven* exclusively by the goal of making the surface form fit the phonotactics. Non-output-drivenness unifies various opaque phonological phenomena such as *chain shifts* (Łubowicz 2011) and *derived environment effects* (or *saltations*; White 2014). Tesar’s theory of output-drivenness within *Optimality Theory* (OT; Prince and Smolensky 2004) has two parts. First, Tesar develops general sufficient constraint conditions for the output-drivenness of OT grammars. Second, Tesar distills the learnability implications of output-drivenness for the classical *inconsistency detection* approach (Merchant 2008) to learning underlying forms in OT.

Tesar’s theory is developed within a feature-based representational framework that assumes all phonological features to be *total*—that is, defined for every segment. Yet a number of phonetic and phonological arguments have been put forward in favor of *partial* features. To start, if manner and place features describe the degree and the place of an oral constriction, how can they be defined for laryngeal segments [h ɦ ʔ], which involve no oral constriction (McCarthy 1988)? As another example, given that a feature such as [delayed release] is simply irrelevant for sonorants, how could its value be defined for sonorants (Hall 2007)? Besides theoretical perspicuity, empirical arguments have also been provided in favor of partial features. For instance, allowing dorsal features such as [high], [low], [front], and [back] to be undefined for a labial segment such as [p] captures the fact that the position of the tongue in the production of a medial [p] does not seem to have a specific articulatory target but simply executes the most convenient transition between the positions required for the two segments flanking [p] (Keating 1988). Finally, various approaches to phonological *underspecification* (Steriade 1995) have explored the assumption that certain segments lack a value for certain features at certain levels of phonological representation, either universally (*radical* underspecification: Kiparsky 1982, Archangeli 1984) or language-specifically (*contrastive* or *archiphonemic* underspecification: Mester and Ito 1989, Inkelas 1995, Dresher 2009).

This article thus tackles the problem of extending Tesar’s theory of output-drivenness to a representational framework that allows for partial phonological features.^{1} Section 2 extends the notion of output-drivenness to a framework with partial features. A hierarchy of notions of output-drivenness is considered, ordered by their strength. Section 3 (together with a final appendix) pinpoints the strongest notion of output-drivenness that allows Tesar’s guarantees for OT output-drivenness to be extended from total to partial phonological features. Two approaches to identity faithfulness constraints relative to partial features are compared, which differ in whether disparities in feature definedness (the feature is defined for only one of two corresponding segments) are or are not penalized just like disparities in feature value (the feature assigns different values to two corresponding segments). Section 4 shows that the learnability implications uncovered by Tesar extend to a framework with partial phonological features only when phonological identity penalizes disparities in feature definedness as well. Section 5 concludes that choices pertaining to the proper definition of featural identity have nontrivial implications for learnability when Tesar’s framework is extended to partial features.

## 2 Output-Drivenness with Partial Features

This section extends Tesar’s notion of output-drivenness to a representational framework with partial features, leading to a hierarchy of notions of output-drivenness.

### 2.1 Output-Drivenness

Consider two different underlying forms **a** and **b** that share a surface form **x** among their candidates,^{2} so that both (**a**, **x**) and (**b**, **x**) count as *candidate pairs*. Suppose that the underlying form **b** is more similar to the surface form **x** than the underlying form **a** is. In other words, the candidate pair (**a**, **x**) has less *internal similarity* than the candidate pair (**b**, **x**). Tesar captures this assumption through the condition

(1) (**a**, **x**) ≤_{sim} (**b**, **x**)

where ≤_{sim} is a *similarity order* properly defined among candidate pairs (or, more precisely, among candidate pairs that share the same surface form). Suppose that a grammar *G* maps the less similar underlying form **a** to the surface form **x**—namely, that *G*(**a**) = **x**. Intuitively, this means that **x** is phonotactically licit and that **x** is not too dissimilar from **a**. Since the phonotactic status of **x** does not depend on the underlying form and furthermore **x** is even more similar to **b** than it is to **a**, the grammar *G* should also map the more similar underlying form **b** to that same surface form **x**—namely, *G*(**b**) = **x**. Tesar calls any grammar that abides by this logic *output-driven*.

**Definition 1 (Output-drivenness)** *A grammar G is* output-driven *relative to a similarity order* ≤_{sim} *provided the following condition holds*

(2) *If G*(**a**) = **x** *and* (**a**, **x**) ≤_{sim} (**b**, **x**), *then G*(**b**) = **x**

*for any underlying/surface form candidate pairs* (**a**, **x**) *and* (**b**, **x**) *sharing the surface form* **x**.
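For a finite grammar, the implication (2) can be checked mechanically. The following Python sketch (the representation is mine, not Tesar’s) models a grammar as a dictionary from underlying forms to surface forms and takes the similarity order as a predicate `leq_sim(a, b, x)` standing for (**a**, **x**) ≤_{sim} (**b**, **x**):

```python
def is_output_driven(grammar, leq_sim):
    """Check definition 1: if G(a) = x and (a, x) <=_sim (b, x), then G(b) = x.

    grammar: dict mapping each underlying form a to its surface form G(a).
    leq_sim: leq_sim(a, b, x) is True iff (a, x) <=_sim (b, x).
    """
    for a, x in grammar.items():
        for b in grammar:  # candidate pairs (b, x) sharing the surface form x
            if leq_sim(a, b, x) and grammar[b] != x:
                return False  # (2) fails: (b, x) is more similar, yet G(b) != x
    return True

# With the trivial similarity order that relates only identical pairs,
# every grammar is (vacuously) output-driven:
assert is_output_driven({"da": "ta", "ta": "ta"}, lambda a, b, x: a == b)
```

As the trivial example suggests, the content of the notion depends entirely on how rich the similarity order is; the sections below are devoted to defining it properly.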

### 2.2 Representational Framework

The notion of output-drivenness is predicated on a similarity order ≤_{sim}. How should it be defined? In order to tackle this question conveniently, I make three restrictive assumptions on the representational framework. First, I assume that underlying and surface forms are strings of segments drawn from a finite segment set ∑ (e.g., ∑ is some set of segments from the IPA table), without any additional structure. Second, I assume that underlying and surface forms in a candidate pair (**a**, **x**) are strings of the same length—namely, that **a** = *a*_{1} . . . *a*_{ℓ} and **x** = *x*_{1} . . . *x*_{ℓ}. An underlying segment *a*_{i} and a surface segment *x*_{j} *correspond* to each other (in the sense of McCarthy and Prince 1993) provided they occupy the same position in the two strings **a** and **x**—namely, provided that *i* = *j*. Thus, each underlying (surface) segment has one and only one corresponding surface (underlying) segment, so that epenthesis, deletion, coalescence, and breaking are not allowed (see Tesar 2014:chap. 2 for extension to a framework that allows for deletion and epenthesis, although not for breaking and coalescence). Finally, I assume that any pair of strings of the same length counts as a candidate pair, so that idiosyncratic restrictions on candidacy are not allowed (see Tesar 2014:sec. 3.3.1 for extension to a framework that relaxes this candidacy *completeness* assumption).

### 2.3 Similarity Order in the Case of Total Features

Assume that the segments in the segment set ∑ are distinguished through certain phonological features collected together in the feature set Φ. A generic phonological feature *φ* is a function that takes a segment *x* and returns a certain feature value *φ*(*x*). Features can take two values (in the case of *binary* features) or more than two values (in the case of *multivalued* features). For instance, [voice] is a binary feature that only takes values + and − while [place] can be construed as a multivalued feature that takes the three values L for labial, C for coronal, and D for dorsal (see, e.g., de Lacy 2006:sec. 2.3.2.1.1). Tesar assumes that the features in the feature set Φ are all *total* (relative to ∑)—that is, defined for each segment in the segment set ∑. Under this assumption, he provides a definition of the similarity order that can be adapted as follows to the representational framework adopted here. The superscript *total* in the notation ≤_{sim}^{total} signals that this definition assumes every feature *φ* to be total.^{3}

**Definition 2 (Similarity order with total features)** *Consider two candidate pairs* (**a**, **x**) *and* (**b**, **x**) *that share the surface string* **x** = *x*_{1} . . . *x*_{ℓ}, *so that the two underlying strings* **a** *and* **b** *have the same length* ℓ—*namely*, *they have the shape* **a** = *a*_{1} . . . *a*_{ℓ} *and* **b** = *b*_{1} . . . *b*_{ℓ}. *The relation* (**a**, **x**) ≤_{sim}^{total} (**b**, **x**) *holds provided the following disjunction holds*

(3) Either: *φ*(*b*_{i}) = *φ*(*x*_{i})
    Or: *φ*(*b*_{i}) = *φ*(*a*_{i})

*for every i* = 1, . . . , ℓ *and for every feature φ in the feature set* Φ.
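Definition 2 transcribes almost literally into code. In this hypothetical sketch (the encoding is mine), a total feature is a Python dict mapping every segment to its value, and the disjunction (3) is checked position by position:

```python
def leq_sim_total(a, b, x, features):
    """Definition 2: (a, x) <=_sim^total (b, x) iff, for every position i and
    every total feature phi, either phi(b_i) = phi(x_i) or phi(b_i) = phi(a_i).

    a, b, x: equal-length strings of segments.
    features: iterable of dicts, each mapping every segment to a feature value.
    """
    if not (len(a) == len(b) == len(x)):
        return False  # not two candidate pairs sharing the surface form x
    return all(
        phi[bi] == phi[xi] or phi[bi] == phi[ai]  # the disjunction (3)
        for ai, bi, xi in zip(a, b, x)
        for phi in features
    )

VOICE = {'t': '-', 'd': '+'}  # a total binary feature over a toy segment set
assert leq_sim_total("d", "t", "t", [VOICE])       # (d, t) <= (t, t): b matches x
assert not leq_sim_total("t", "d", "t", [VOICE])   # (t, t) is NOT <= (d, t)
```

The two assertions at the end illustrate the intended asymmetry: the faithful pair ("t", "t") has strictly greater internal similarity than the unfaithful pair ("d", "t").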

The first disjunct of (3) says that the segment *b*_{i} of the more similar underlying string **b** patterns like the corresponding segment *x*_{i} of the surface string **x** relative to the feature *φ*. Suppose that this disjunct fails because the feature *φ* assigns different values to the two segments *b*_{i} and *x*_{i}. The second disjunct must then hold. It says that the segment *a*_{i} of the less similar underlying string **a** patterns like the segment *b*_{i} of the more similar underlying string **b** relative to the feature *φ*. This entails in particular that, like *b*_{i}, *a*_{i} disagrees with *x*_{i} relative to the feature *φ*. The condition (**a**, **x**) ≤_{sim}^{total} (**b**, **x**) thus indeed formally captures the intuition that the underlying string **a** is at most as similar to the surface string **x** as the underlying string **b** is.

### 2.4 Graphical Representation of the Similarity Order

The disjunction (3) is the heart of Tesar’s definition 2 of the similarity order. Toward the goal of extending this definition to the case of partial features, it is useful to pause to introduce a graphical representation of this disjunction. To start, suppose that the feature *φ* that figures in this disjunction is total and binary—namely, that it assigns one of the two values + or − to any segment. In this case, Tesar’s disjunction can be stated as the requirement that the two pairs of feature values (*φ*(*a*_{i}), *φ*(*x*_{i})) and (*φ*(*b*_{i}), *φ*(*x*_{i})) be connected by an arrow in (4). Since *φ* is binary, there are four pairs of feature values to consider. These four pairs are sorted into two groups separated by a vertical line, because we only compare pairs of feature values that share the second value. The straight arrows represent the first disjunct of (3). The loop arrows represent the second disjunct and thus ensure that the corresponding similarity order is reflexive.

Next, consider the case where the feature *φ* is again total but multivalued. For concreteness, assume that *φ* is the feature [place] and that it takes the three values L, C, and D. In this case, Tesar’s disjunction (3) can be stated as the requirement that the two pairs of feature values (*φ*(*a*_{i}), *φ*(*x*_{i})) and (*φ*(*b*_{i}), *φ*(*x*_{i})) be connected by an arrow in (5). Since the feature considered is three-valued, there are nine pairs of feature values to consider. These nine pairs are sorted into three groups separated by a vertical line, because we only compare pairs of feature values that share the second value. Again, as above, the straight arrows represent the first disjunct of (3) and the loop arrows represent the second disjunct.

In conclusion, Tesar’s definition 2 of the similarity order consists of two steps. One step defines a partial order among pairs of feature values, such as those represented in (4) or (5). The other step then “lifts” that partial order from pairs of feature values to pairs of strings by requiring the ordering to hold for the values assigned by each feature to each segment.

### 2.5 Similarity Orders in the Case of Partial Features

Tesar’s definition 2 of the similarity order ≤_{sim}^{total} assumes that each feature in the feature set Φ is total—that is, defined for each segment in the segment set ∑. This assumption guarantees that it makes sense to consider and compare the values *φ*(*a*_{i}), *φ*(*b*_{i}), and *φ*(*x*_{i}) assigned by any feature *φ* to any of the segments *a*_{i}, *b*_{i}, *x*_{i} involved. Consider now a slightly more general representational framework, which allows for a feature *φ* to be *partial*—that is, undefined for some segments in the segment set ∑. This assumption requires some tampering with the disjunction (3) at the heart of Tesar’s original definition 2, as a certain feature *φ* could be undefined for one or more of the three segments *a*_{i}, *b*_{i}, *x*_{i}. For concreteness, assume that *φ* is binary. For the sake of visualization, let me represent the fact that *φ* is undefined for a certain segment through the condition that *φ* assigns it the “dummy” value 0 (see Hayes 2009:sec. 4.8). Three different possibilities are then plotted in (6).

Again, the loop arrows on each pair are needed in order for the resulting similarity order to be reflexive. The option (6a) ignores pairs that contain the dummy value 0 (apart from the loop arrows). In other words, it assumes that the dummy value cannot contribute to any feature disparity. This option represents the minimal extension of Tesar’s diagram (4). The option (6c) instead represents the maximal extension. It effectively assumes that the dummy value 0 can lead to disparities just as a third plain feature value does, as shown by comparison with Tesar’s diagram (5) for the three-valued feature [place]. In other words, it treats a binary partial feature analogously to a total three-valued feature. The intermediate option (6b) instead encodes the fact that 0 is not just a plain third feature value, by avoiding any straight arrows in the rightmost block.

The three disjunctions (7a), (7b), and (7c) in the definition below correspond to the three diagrams (6a), (6b), and (6c) when the feature *φ* is binary, in the sense that one of these three disjunctions holds provided the pairs of feature values (*φ*(*a*_{i}), *φ*(*x*_{i})) and (*φ*(*b*_{i}), *φ*(*x*_{i})) are connected by an arrow in the corresponding diagram. Again, the straight arrows represent the first disjunct and the loop arrows represent the second disjunct.^{4} The three similarity orders thus obtained for a framework with partial features are distinguished by the superscripts *sparse*, *med(ium)*, and *dense*, which intuitively reflect the relative number of pairs of values connected by straight arrows in the three diagrams (6a), (6b), and (6c). If the feature *φ* is total, the three disjunctions (7a), (7b), and (7c) are equivalent to Tesar’s original disjunction (3). If all the features in the feature set Φ are total, the two definitions 2 and 3 thus define the same similarity order.

**Definition 3 (Similarity order with partial features)** *Consider two candidate pairs* (**a**, **x**) *and* (**b**, **x**) *that share the surface string* **x** = *x*_{1} . . . *x*_{ℓ}, *so that the two underlying strings* **a** *and* **b** *have the same length* ℓ, *that is*, *have the shape* **a** = *a*_{1} . . . *a*_{ℓ} *and* **b** = *b*_{1} . . . *b*_{ℓ}. *The relation* (**a**, **x**) ≤_{sim}^{sparse} (**b**, **x**), (**a**, **x**) ≤_{sim}^{med} (**b**, **x**), *or* (**a**, **x**) ≤_{sim}^{dense} (**b**, **x**) *holds if and only if the corresponding disjunction (7a)*, *(7b)*, *or (7c) holds*

(7) a. Either: Feature *φ* is defined for *a*_{i}, *b*_{i}, and *x*_{i}, and furthermore *φ*(*b*_{i}) = *φ*(*x*_{i}).
       Or: Feature *φ* is undefined for both *b*_{i} and *a*_{i}, or else it is defined for both and furthermore *φ*(*b*_{i}) = *φ*(*a*_{i}).
    b. Either: Feature *φ* is defined for *b*_{i} and *x*_{i}, and furthermore *φ*(*b*_{i}) = *φ*(*x*_{i}).
       Or: Feature *φ* is undefined for both *b*_{i} and *a*_{i}, or else it is defined for both and furthermore *φ*(*b*_{i}) = *φ*(*a*_{i}).
    c. Either: Feature *φ* is undefined for both *b*_{i} and *x*_{i}, or else it is defined for both and furthermore *φ*(*b*_{i}) = *φ*(*x*_{i}).
       Or: Feature *φ* is undefined for both *b*_{i} and *a*_{i}, or else it is defined for both and furthermore *φ*(*b*_{i}) = *φ*(*a*_{i}).

*for every i* = 1, . . . , ℓ *and for every feature φ in the feature set* Φ.
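The three disjunctions differ only in their first disjunct, while sharing the second. A possible transcription into Python (my own encoding: a partial feature is a dict that simply omits the segments for which it is undefined, so that `phi.get(s)` returns `None` exactly when the feature is undefined for `s`):

```python
def _disjunct2(phi, ai, bi):
    # Shared second disjunct of (7a-c): phi is undefined for both b_i and a_i,
    # or defined for both with the same value.
    return phi.get(bi) == phi.get(ai)

def leq_sparse(a, b, x, features):
    """(7a): phi defined for a_i, b_i, and x_i with phi(b_i) = phi(x_i); or disjunct 2."""
    return all(
        (None not in (phi.get(ai), phi.get(bi), phi.get(xi)) and phi[bi] == phi[xi])
        or _disjunct2(phi, ai, bi)
        for ai, bi, xi in zip(a, b, x) for phi in features
    )

def leq_med(a, b, x, features):
    """(7b): phi defined for b_i and x_i with phi(b_i) = phi(x_i); or disjunct 2."""
    return all(
        (None not in (phi.get(bi), phi.get(xi)) and phi[bi] == phi[xi])
        or _disjunct2(phi, ai, bi)
        for ai, bi, xi in zip(a, b, x) for phi in features
    )

def leq_dense(a, b, x, features):
    """(7c): phi(b_i) and phi(x_i) both undefined, or both defined and equal;
    undefinedness behaves like an extra feature value. Or disjunct 2."""
    return all(
        phi.get(bi) == phi.get(xi) or _disjunct2(phi, ai, bi)
        for ai, bi, xi in zip(a, b, x) for phi in features
    )

# A binary feature defined only for some segments (values chosen for illustration):
STRIDENT = {'ð': '-', 'z': '+'}
# Shared undefinedness of b_i and x_i satisfies only the dense first disjunct:
assert leq_dense("ð", "f", "b", [STRIDENT])
assert not leq_med("ð", "f", "b", [STRIDENT])
assert not leq_sparse("ð", "f", "b", [STRIDENT])
```

The final assertions preview the wedge between the three orders that section 3.4 exploits: a pair whose two segments are both undefined for a feature counts as maximally similar only under the dense order.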

### 2.6 A Hierarchy of Notions of Output-Drivenness

The three diagrams (6a), (6b), and (6c) contain an increasing number of arrows. As a result, the three corresponding similarity orders ≤_{sim}^{sparse}, ≤_{sim}^{med}, and ≤_{sim}^{dense} hold among an increasing number of candidate pairs. In other words, the implications in (8) hold for any two candidate pairs (**a**, **x**) and (**b**, **x**) that share the surface form **x**.

(8) (**a**, **x**) ≤_{sim}^{sparse} (**b**, **x**) ⇒ (**a**, **x**) ≤_{sim}^{med} (**b**, **x**) ⇒ (**a**, **x**) ≤_{sim}^{dense} (**b**, **x**)

According to definition 1, a grammar *G* is output-driven relative to a similarity order ≤_{sim} provided it satisfies the following implication for any two candidate pairs (**a**, **x**) and (**b**, **x**): if *G* maps **a** to **x** and (**a**, **x**) ≤_{sim} (**b**, **x**), then *G* also maps **b** to **x**. Suppose that the similarity order ≤_{sim} is so sparse that it only holds of identical pairs: that is, (**a**, **x**) ≤_{sim} (**b**, **x**) if and only if **a** and **b** coincide. Obviously, any grammar is output-driven relative to this similarity order. In other words, this sparsest similarity order yields the weakest (namely, trivial) notion of output-drivenness. In general, the sparser the similarity order ≤_{sim} (i.e., the fewer candidate pairs there are in a similarity relation), the weaker the corresponding notion of output-drivenness. The implications (8) among the three similarity orders therefore yield the reverse implications (9) among the corresponding notions of output-drivenness: output-drivenness relative to ≤_{sim}^{dense} entails output-drivenness relative to ≤_{sim}^{med}, which in turn entails output-drivenness relative to ≤_{sim}^{sparse}.

In conclusion, (9) defines a hierarchy of notions of output-drivenness for a representational framework that allows for partial features. The rest of the article investigates OT guarantees for and learnability implications of these various notions of output-drivenness.

## 3 Establishing Output-Drivenness for OT Grammars

This section pinpoints the strongest notion of output-drivenness in the hierarchy (9) that allows Tesar’s guarantees for OT output-drivenness to be extended from total to partial phonological features. The discussion has subtle implications for the definition of featural identity.

### 3.1 Featural Identity in the Case of Total Features

Conditions on the output-drivenness of OT grammars take the form of conditions on the constraint set used to define them. Since I am assuming that underlying (surface) segments have one and only one surface (underlying) correspondent, the only relevant faithfulness constraints are featural identity constraints (McCarthy and Prince 1993). To start, assume that the features in the feature set Φ are all total relative to the segment set ∑. The definition of the *identity* faithfulness constraint IDENT_{φ} corresponding to a total feature *φ* is recalled in (10).

Clause (10a) defines the faithfulness constraint IDENT_{φ} for a pair (*a*, *x*) of underlying/surface segments: it is violated only if the feature *φ* assigns different values *φ*(*a*) and *φ*(*x*) to the two segments *a* and *x*. Clause (10b) extends the definition from segments to strings, by summing over pairs of corresponding segments. Clause (10b) only considers pairs of underlying and surface strings of the same length, as required for them to form a candidate pair.

### 3.2 Establishing OT Output-Drivenness in the Case of Total Features

For which constraint sets does it happen that the OT grammars corresponding to all constraint rankings qualify as output-driven? Under the current assumption that the features in the feature set Φ are all total, we focus on output-drivenness relative to the similarity order ≤_{sim}^{total} provided by Tesar’s definition 2 (the other similarity orders provided by definition 3 collapse with ≤_{sim}^{total} when the features are all total). Tesar’s answer to this question is recalled in theorem 1, adapted to the restrictive representational framework considered here (see Tesar 2014:chaps. 2 and 3 for extension to a framework that allows for epenthesis and deletion).

**Theorem 1 (Output-drivenness with total features)** *Assume that the faithfulness constraint set consists of the identity faithfulness constraints (10) relative to a set* Φ *of features that are all total (either binary or multivalued). The OT grammar corresponding to any ranking of the constraint set is output-driven relative to the similarity order* ≤_{sim}^{total} *provided by definition 2*.

Tesar’s theorem 1 makes no assumptions on the markedness constraints. Only the faithfulness constraints matter for output-drivenness. If in particular the faithfulness constraints are all of the identity-type (10), output-drivenness holds.
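Theorem 1 can be spot-checked by brute force on a toy system. The sketch below (toy segments, features, and constraints of my own choosing, not from the article) pairs the identity constraint IDENT[voice], defined as in (10), with a markedness constraint against voiced segments, and verifies that every ranking yields a grammar that is output-driven relative to the similarity order of definition 2:

```python
from itertools import permutations

VOICE = {'t': '-', 'd': '+'}                     # a total binary feature

def leq_total(a, b, x):
    # The disjunction (3) for the single feature [voice], length-1 strings.
    return VOICE[b] == VOICE[x] or VOICE[b] == VOICE[a]

ident = lambda i, c: int(VOICE[i] != VOICE[c])   # IDENT[voice], as in (10)
star_voice = lambda i, c: int(VOICE[c] == '+')   # toy markedness constraint
cons = [ident, star_voice]

def ot_map(ranking, a):
    # OT evaluation: lexicographic comparison of violation profiles.
    return min('td', key=lambda c: tuple(cons[k](a, c) for k in ranking))

# Every ranking yields an output-driven grammar relative to <=_sim^total:
for r in permutations(range(len(cons))):
    for a in 'td':
        x = ot_map(r, a)
        for b in 'td':
            assert not leq_total(a, b, x) or ot_map(r, b) == x
```

This is of course only a sanity check on a two-segment system, not a substitute for Tesar’s proof; but it makes the quantification of the theorem (over all rankings and all candidate pairs) concrete.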

### 3.3 Featural Identity in the Case of Partial Features: Two Options

The definition (10) of the identity faithfulness constraint IDENT_{φ} assumes the feature *φ* to be total. How should this definition be adapted when the feature *φ* is partial? Two options are readily available. According to the *stronger* definition (11a), the identity faithfulness constraint penalizes corresponding segments that differ either in feature definedness (the feature is defined for only one of the two segments) or in feature value (the feature is defined for both segments but assigns them different values). According to the *weaker* definition (11b), the identity faithfulness constraint only penalizes pairs of corresponding segments that differ in feature value, but it assigns no violations when the feature is defined for only one of the two segments. Both definitions are trivially extended from pairs of corresponding segments to pairs of corresponding strings (of the same length) through the same clause as (10b).

The labels *strong* and *weak* capture the fact that the strong identity constraint assigns at least as many violations as the weak identity constraint to any candidate pair: whenever the weak constraint is violated, the strong one is violated as well, but not vice versa.

Which of these two definitions should be adopted? As recalled in section 1, partial features arise in the theory of phonological underspecification or have been motivated by phonetic considerations. The phonetic literature is silent on the issue of the proper definition of identity constraints. Turning to underspecification theory, Harrison and Kaun (2001) entertain the hypothesis that Hungarian has an underlying vowel /E/ underspecified for quality. They consider the identity constraint IDENT_{[quality]} and assume that it is weakly defined: it does not penalize candidates that put the underlying underspecified vowel /E/ in correspondence with a fully specified surface vowel such as [e] or [æ]. As another example, Colina (2013) assumes that some dialects of Galician have a voiced velar obstruent G underspecified for continuancy. She considers the identity constraint IDENT_{[cont]} and assumes that it is weakly defined—namely, that “outputs underspecified for continuancy do not violate IDENT_{[cont]} due to the fact that there is no continuancy specification that could correspond to the input” (p. 90). Yet this endorsement of the weak definition (11b) of featural identity does not seem to be supported by substantial arguments: indeed, the examples just mentioned turn out to be consistent with the strong construal (11a) as well.^{5} This article will show that learnability considerations instead do bear on the choice between weak and strong featural identity, favoring the latter.

### 3.4 Establishing OT Output-Drivenness in the Case of Partial Features

Tesar’s output-drivenness theorem 1 assumes that the faithfulness constraint set consists of identity faithfulness constraints that are all relative to total features. How does the theorem extend to partial features? The answer is provided by the following theorem 2. Tesar’s output-drivenness theorem 1 extends to a representational framework with partial features when the strong notion (11a) of featural identity is adopted, no matter which of the three options ≤_{sim}^{sparse}, ≤_{sim}^{med}, or ≤_{sim}^{dense} is used to extend Tesar’s original similarity order ≤_{sim}^{total}. The situation is different when the weak notion (11b) of featural identity is adopted instead: the extension holds only for ≤_{sim}^{sparse} and ≤_{sim}^{med}; it fails for the strongest notion of output-drivenness corresponding to ≤_{sim}^{dense}.

**Theorem 2 (Output-drivenness with partial features)** *Assume that the faithfulness constraint set consists of the identity faithfulness constraints relative to a set* Φ *of possibly partial phonological features*. **(A)** *If the strong definition (11a) of featural identity is adopted, the OT grammar corresponding to any ranking of the constraint set is output-driven relative to any of the similarity orders* ≤_{sim}^{sparse}, ≤_{sim}^{med}, *or* ≤_{sim}^{dense} *provided by definition 3*. **(B)** *If the weak definition (11b) of featural identity is adopted, the OT grammar corresponding to any ranking of the constraint set is output-driven relative to either similarity order* ≤_{sim}^{sparse} *or* ≤_{sim}^{med} *but not relative to* ≤_{sim}^{dense}.

The proof of theorem 2A is straightforward. In fact, given a feature *φ* that is partial and *n*-valued, consider the corresponding feature φ̃ that coincides with *φ* when the latter is defined and otherwise assigns the “dummy” value 0 (*dummy* means that 0 does not figure among the *n* values taken by the original feature *φ*). Obviously, φ̃ is a total and (*n* + 1)-valued feature. Furthermore, the strong identity constraint defined as in (11a) out of the partial and *n*-valued feature *φ* is identical to the faithfulness constraint defined as in (10) out of the total and (*n* + 1)-valued feature φ̃. Let Φ̃ be the feature set obtained from the original feature set Φ by replacing each partial feature *φ* with the corresponding total feature φ̃. Obviously, the similarity order ≤_{sim}^{dense} constructed by definition 3 out of the original partial feature set Φ coincides with the similarity order ≤_{sim}^{total} constructed by definition 2 out of the derived total feature set Φ̃. Tesar’s theorem 1 ensures that, if identity constraints are defined in terms of strong featural identity (11a), the resulting OT grammars are all output-driven relative to ≤_{sim}^{dense}. Finally, output-drivenness relative to ≤_{sim}^{sparse} and ≤_{sim}^{med} follows from the hierarchy (9).

Turning to theorem 2B, appendix A.3 provides a proof that output-drivenness holds relative to ≤_{sim}^{sparse} and ≤_{sim}^{med} in the case of weak featural identity. This proof is a simple extension of Tesar’s proof of the original theorem 1, which is in turn greatly simplified, taking advantage of the simplified representational framework assumed here. The rest of this section provides a counterexample showing that output-drivenness instead fails relative to the similarity order ≤_{sim}^{dense} in the case of weak featural identity. The segment set ∑ consists of the four segments *ð*, *z*, *b*, and *f*. The feature set Φ consists of the single feature *φ* = [strident]. This feature is assumed to be defined only for coronals (see, e.g., Hayes 2009), in the present case only for *ð* (which has the value −) and *z* (which has the value +), not for *b* and *f*. The constraint set contains only one faithfulness constraint—namely, the weak identity faithfulness constraint IDENT_{[strident]} defined as in (11b). The constraint set furthermore contains two markedness constraints: *NONSIBFRIC, which encodes a preference for sibilants among fricatives (and thus punishes [ð] and [f]); and *LABIAL, which encodes the markedness of labial place compared to coronal place (and thus punishes [b] and [f]). The two tableaux in (12) show a ranking that maps /ð/ to [b], because the two nonsibilant fricatives [ð] and [f] are ruled out by *NONSIBFRIC and the sibilant fricative [z] is ruled out by IDENT_{[strident]}. The underlying form /f/ is instead mapped to [z], because the feature [strident] is undefined for /f/ and the mapping to the strident [z] thus does not violate IDENT_{[strident]}, by virtue of the weak construal of featural identity.

Let [b] play the role of the surface string **x** and let /ð/ and /f/ play the roles of the two underlying strings **a** and **b**, respectively, as stated in (13). The similarity order ≤_{sim}^{dense} holds between the two candidate pairs (**a**, **x**) and (**b**, **x**) thus defined, as stated in (13). In fact, this similarity order is defined in terms of the disjunction (7c). The second disjunct of this disjunction fails, because the feature *φ* = [strident] is defined for /ð/ (which plays the role of the unique segment of **a**) but is undefined for /f/ (which plays the role of the unique segment of **b**). Yet the first disjunct does hold, because the feature *φ* = [strident] is undefined for both /f/ (which plays the role of the unique segment of **b**) and [b] (which plays the role of the unique segment of **x**).

The OT grammar described in (12) is thus not output-driven relative to ≤_{sim}^{dense}, since it maps **a** = /ð/ but not **b** = /f/ to **x** = [b], even though **b** = /f/ is more similar to **x** = [b] than **a** = /ð/ is, when similarity is measured through ≤_{sim}^{dense}.^{6} The problem illustrated here falls under Tesar’s (2014:chap. 4) pattern of *distinction only at lesser similarity*: for the lesser similarity input **a** = /ð/, the faithfulness constraint IDENT_{[strident]} distinguishes between the candidates (/ð/, [b]) and (/ð/, [z]); while for the greater similarity input **b** = /f/, the constraint does not distinguish between (/f/, [b]) and (/f/, [z]), because /f/ is undefined for stridency.
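The counterexample can also be replicated computationally. The sketch below is an illustrative reimplementation (not the article’s own tableaux format); it assumes one total ranking consistent with (12), namely *NONSIBFRIC ≫ IDENT[strident] ≫ *LABIAL, and evaluates the four candidates under weak featural identity:

```python
STRIDENT = {'ð': '-', 'z': '+'}  # [strident], defined only for the coronals

def nonsibfric(cand): return 1 if cand in ('ð', 'f') else 0   # *NONSIBFRIC
def labial(cand):     return 1 if cand in ('b', 'f') else 0   # *LABIAL

def ident_weak(inp, cand):
    # Weak featural identity (11b): a violation only when [strident] is
    # defined for BOTH segments and assigns them different values.
    u, s = STRIDENT.get(inp), STRIDENT.get(cand)
    return 1 if (u is not None and s is not None and u != s) else 0

def grammar(inp):
    # Ranking assumed here: *NONSIBFRIC >> IDENT[strident] >> *LABIAL.
    candidates = ['ð', 'z', 'b', 'f']
    profile = lambda c: (nonsibfric(c), ident_weak(inp, c), labial(c))
    return min(candidates, key=profile)  # lexicographic OT evaluation

assert grammar('ð') == 'b'   # /ð/ maps to [b], as in the first tableau of (12)
assert grammar('f') == 'z'   # /f/ maps to [z], as in the second tableau of (12)
```

Since /f/ is at least as similar to [b] as /ð/ is under the dense order, yet only /ð/ is mapped to [b], the two assertions jointly witness the failure of output-drivenness relative to ≤_{sim}^{dense}.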

The following section explores the learnability implications of this conclusion that partial features threaten output-drivenness according to weak featural identity. This result is also relevant in its own right, as it contributes to an OT literature that accounts for opaque (and thus non-output-driven) child patterns (such as chain shifts) through the assumption that the child temporarily entertains underlying representations that are underspecified for certain features (see Dinnsen and Barlow 1998 and references therein).

## 4 Learnability Implications of Output-Drivenness

This section shows that, within the hierarchy (9) of notions of output-drivenness, only output-drivenness relative to ≤_{sim}^{dense} is strong enough for the learnability implications uncovered by Tesar (2014) to extend from total to partial phonological features.

### 4.1 Inconsistency Detection

Tesar (2014:chaps. 7, 8) focuses on the implications of output-drivenness for the following classical formulation of the language-learning problem (for a more detailed formulation of this problem, see Merchant 2008:sec. 1.1, Tesar 2014:sec. 6.2, and Magri 2015:sec. 2). Adhering to the generative perspective, a language learner is granted full knowledge of the typological space. The typology is defined in OT terms—namely, through all possible rankings of a given constraint set. The learner’s training data consist of a set {**x**_{1}, **x**_{2}, . . . } of surface forms sampled from the target language. The learner’s task is to infer simultaneously a lexicon of corresponding underlying forms {**a**_{1}, **a**_{2}, . . . } and a constraint ranking such that the OT grammar corresponding to that ranking maps each of the inferred underlying forms **a**_{1}, **a**_{2}, . . . to the corresponding given surface forms **x**_{1}, **x**_{2}, . . . .^{7}

Knowledge of the target ranking would help the learner to reverse-engineer the target lexicon of underlying forms. On the other hand, knowledge of the target lexicon of underlying forms would allow the learner to easily infer the target ranking from the underlying/surface form mappings. The challenge raised by the learning problem is that both the lexicon of underlying forms and the ranking are unknown and thus need to be learned simultaneously. A natural strategy to cope with this challenge is to maintain partial lexical and grammatical/ranking information and to increment them iteratively by “boosting” one type of partial information with the other. Ignoring for the moment issues of algorithmic efficiency, this approach can be implemented through the scheme (14), explained below.

The learner represents its current ranking information through a set *R* of currently *admissible* constraint rankings, which is initialized to the set of all possible rankings. The learner represents its current lexical information through a set *L*_{**x**} of currently *admissible* underlying forms for each training surface form **x**, which is initialized to the set of all possible underlying forms for **x**. The learner then enriches the current ranking and lexical information by iterating the following two steps. One step extracts ranking information (*ERI step*): the learner tries to zoom closer to the target constraint ranking, by eliminating from the set *R* of currently admissible rankings any rankings that are inconsistent with the current lexical information. The other step extracts lexical information (*ELI step*): the learner tries to zoom closer to the target underlying forms, by eliminating from the set *L*_{**x**} of currently admissible underlying forms for a training surface form **x** any underlying forms that are inconsistent with the current ranking information. The learner iterates the two ERI and ELI steps until, hopefully, the set *R* of admissible rankings has been reduced to just one ranking (or to just a few rankings, which all capture the target grammar), and the set *L*_{**x**} of admissible underlying forms for each training surface form **x** has been reduced to just one underlying form (or just a few underlying forms that differ only with respect to features that are not contrastive). Because both the ERI and the ELI steps prune inconsistent options, the resulting learning scheme (14) is called *inconsistency detection* (Kager 1999, Tesar et al. 2003, Merchant 2008, Tesar 2006, 2014).
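The scheme (14) can be sketched in code in a deliberately tiny setting where rankings and underlying forms are enumerated explicitly (which, as footnote 8 notes, does not scale). The one-feature segment inventory and the constraint names Ident and *Plus are illustrative assumptions, not part of the article.

```python
from itertools import permutations

# Toy sketch of the inconsistency-detection scheme (14). Rankings and
# underlying forms are enumerated explicitly; the one-feature segment
# inventory and the constraints Ident and *Plus are illustrative.

SEGMENTS = ['+', '-']          # one segment = one binary feature value

def ot_map(ranking, underlying):
    """Map an underlying one-segment form to its optimal surface form
    under the given ranking of the two constraints."""
    def violations(surface):
        profile = []
        for c in ranking:
            if c == 'Ident':   # faithfulness: penalize a value change
                profile.append(int(surface != underlying))
            else:              # markedness *Plus: penalize surface '+'
                profile.append(int(surface == '+'))
        return profile
    return min(SEGMENTS, key=violations)

def inconsistency_detection(surface_forms):
    R = list(permutations(['Ident', '*Plus']))      # admissible rankings
    L = {x: list(SEGMENTS) for x in surface_forms}  # admissible lexica
    changed = True
    while changed:
        changed = False
        # ERI step: prune rankings inconsistent with the current lexica.
        keep = [r for r in R
                if all(any(ot_map(r, a) == x for a in L[x]) for x in L)]
        changed, R = changed or len(keep) != len(R), keep
        # ELI step: prune underlying forms no admissible ranking maps to x.
        for x in L:
            keep = [a for a in L[x] if any(ot_map(r, a) == x for r in R)]
            changed, L[x] = changed or len(keep) != len(L[x]), keep
    return R, L
```

Running `inconsistency_detection(['+', '-'])` first prunes the ranking with *Plus on top (no underlying form is mapped to [+] under it) and then prunes the unfaithful underlying forms, converging on the faithful lexicon together with the ranking Ident ≫ *Plus.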

### 4.2 Speeding Up the ERI Step

At the ERI step, the learner eliminates from the current set *R* of admissible rankings any ranking that is inconsistent with the current lexical information. A ranking is inconsistent with the current lexical information provided there exists some training surface form **x** such that every corresponding admissible underlying form in *L*_{**x**} fails to be mapped to **x** by that ranking. The ERI step can thus be made explicit as in (15).

(15) ERI step: Eliminate from *R* any ranking whose corresponding OT grammar maps every underlying form in *L*_{**x**} into a surface form different from **x**.

Tesar (2014) shows that this formulation (15) of the ERI step can be hugely simplified when the OT typology explored by the learner consists of grammars that are all output-driven relative to a similarity order ≤_{sim}. In fact, assume that the lexicon *L*_{**x**} of currently admissible underlying forms for each training surface form **x** admits a most similar underlying form when similarity is measured through the similarity order ≤_{sim}, as stated in (16).

(16) There exists a *most similar admissible underlying form*; that is, there exists an underlying form **b** that belongs to *L*_{**x**} and satisfies the similarity inequality (**a**, **x**) ≤_{sim} (**b**, **x**) for every other underlying form **a** in *L*_{**x**}.

The ERI step (15) can then be equivalently reformulated as in (17). In fact, suppose that a ranking is eliminated by the original ERI step (15). This means that the corresponding grammar fails on every underlying form in *L*_{**x**}—that is, maps it to a surface form different from **x**. Hence, that grammar fails in particular on the underlying form **b**, as **b** belongs to *L*_{**x**}. The ranking considered is thus also eliminated by the reformulated ERI step (17). Vice versa, assume by contradiction that there is some ranking that is eliminated by the reformulated ERI step (17) but not by the original ERI step (15). Since that ranking is not eliminated by the original ERI step (15), there exists at least one admissible underlying form **a** in *L*_{**x**} that is mapped to **x** by the corresponding OT grammar. Since that grammar is ≤_{sim}-output-driven (because of the assumption that all grammars in the typology are ≤_{sim}-output-driven) and since (**a**, **x**) ≤_{sim} (**b**, **x**) (because of the assumption (16) that **b** is the most similar admissible underlying form), that grammar must also map **b** to **x**. The ranking considered could thus not have been eliminated by the reformulated ERI step (17).

(17) ERI step (reformulation based on output-drivenness): Eliminate from *R* any ranking whose corresponding OT grammar maps **b** to a surface form different from **x**, where **b** is the most similar underlying form in *L*_{**x**}, which exists by (16).

The original ERI step (15) requires the learner to look at all the admissible underlying forms in *L*_{**x**}. The reformulation (17) achieves the same net result by looking at only one—namely, the most similar one. This simplification is substantial. In fact, there are no implementations of the original ERI step (15) that are efficient.^{8} The reformulation (17) can instead be executed efficiently.^{9} This is indeed the most spectacular learnability implication of Tesar’s notion of output-drivenness (see Magri 2015 for discussion).
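The computational contrast between (15) and (17) can be sketched as follows. The toy grammar below (unconditional neutralization of every segment to [-], which is trivially output-driven) and all names are illustrative assumptions, not the article's.

```python
# Toy contrast between the two formulations (15) and (17) of the ERI
# step. The grammar below (unconditional neutralization of every segment
# to '-') is an illustrative stand-in for an output-driven OT grammar.

def grammar(underlying):
    """Map any string over {'+', '-'} to the all-'-' string of the same
    length: an unconditional neutralization, hence output-driven."""
    return '-' * len(underlying)

def eri_original(L_x, x):
    """(15): the ranking survives iff SOME admissible underlying form in
    L_x is mapped to x; costs one grammar evaluation per member of L_x."""
    return any(grammar(a) == x for a in L_x)

def eri_reformulated(b, x):
    """(17): given output-drivenness, testing only the most similar
    admissible form b gives the same verdict with a single evaluation."""
    return grammar(b) == x
```

For **x** = [--] with all four two-segment strings still admissible, the most similar form is the fully faithful string, and the single evaluation in `eri_reformulated` returns the same verdict as the exhaustive check in `eri_original`.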

### 4.3 Existence of the Most Similar Admissible Underlying Form in the Case of Total Features

In order to benefit from this spectacular speed-up, we need to establish the assumption (16) that the set *L*_{**x**} of currently admissible underlying forms for the training surface form **x** contains a most similar underlying form. This set *L*_{**x**} is initialized to the entire set of underlying forms for **x** and then iteratively pruned at the ELI step. Within the restrictive representational framework adopted in this article, the surface form **x** is a segment string **x** = *x*_{1} . . . *x*_{ℓ} of some length ℓ. The set *L*_{**x**} is thus initialized to the set of all underlying strings **a** = *a*_{1} . . . *a*_{ℓ} of the same length ℓ (because candidate pairs consist of strings of the same length). The ELI step then works segment by segment, feature by feature: at each iteration, the ELI step tries to set the value of one of the features in Φ for one of the ℓ underlying segments, and eliminates from *L*_{**x**} all the underlying strings that have a different value for that feature and that segment.

With this small amount of background on the current set *L*_{**x**} of admissible underlying forms, we can now take a closer look at the crucial assumption (16) that *L*_{**x**} admits a most similar underlying form. Assume for now that the features in the feature set Φ are all total and thus consider Tesar’s (2014) original similarity order. Consider the underlying form **b** = *b*_{1} . . . *b*_{ℓ} defined segment by segment and feature by feature as in (18).^{10}

(18)
- a. If a feature *φ* has not yet been set for the *i*th underlying segment, then *b*_{i} has the same value for that feature as the *i*th surface segment *x*_{i}.
- b. If a feature *φ* has already been set to a certain value for the *i*th underlying segment, then *b*_{i} has that value for that feature.
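Prescription (18) can be rendered directly in code. The data layout below (segments as dictionaries from feature names to values, the already-set features keyed by position and feature name) is my own illustrative choice, not the article's.

```python
# A direct rendering of prescription (18): build the most similar
# admissible underlying form b from the surface string x and the feature
# values set so far at the ELI step.

def most_similar_form(x, set_features):
    """x: list of segments, each a dict from feature name to value.
    set_features: dict mapping (position i, feature) to the value already
    set at the ELI step. Returns b as a list of segment dicts."""
    b = []
    for i, x_i in enumerate(x):
        b_i = {}
        for feature, surface_value in x_i.items():
            if (i, feature) in set_features:
                # (18b): keep the value already set for this segment.
                b_i[feature] = set_features[(i, feature)]
            else:
                # (18a): copy the value of the corresponding surface segment.
                b_i[feature] = surface_value
        b.append(b_i)
    return b
```

For instance, if only [voice] of the first segment has been set (to a value differing from the surface one), **b** keeps that set value and copies every other feature value from **x**.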

The form **b** defined in (18) validates the crucial assumption (16) because it is the admissible underlying form most similar to **x**. In fact, condition (18b) says that the form **b** respects all the feature values that have been set so far and thus ensures that **b** belongs to *L*_{**x**} and thus counts as admissible. Furthermore, the underlying form **b** satisfies the similarity inequality (**a**, **x**) ≤_{sim} (**b**, **x**) for any underlying form **a** = *a*_{1} . . . *a*_{ℓ} in the set *L*_{**x**}. If a feature *φ* has not yet been set for the *i*th segment, clause (18a) guarantees that the two segments *b*_{i} and *x*_{i} have the same value for feature *φ*. This validates the first disjunct of the disjunctive definition (3) of the similarity order. If a feature *φ* has instead been set to a certain value for the *i*th segment, then the segment *b*_{i} has that value by clause (18b) and the segment *a*_{i} shares that same value (because underlying forms whose *i*th segment has a different value have been pruned from *L*_{**x**}), validating the second disjunct of the definition (3).

### 4.4 Existence of the Most Similar Admissible Underlying Form in the Case of Partial Features

Let me now consider the case where the feature set Φ instead contains partial features. As noted in the preceding section, in this case we have at our disposal the hierarchy of three similarity orders provided by definition 3. The same reasoning as above shows that the form **b** defined analogously to (18) satisfies the crucial assumption (16) when similarity is measured relative to the strongest of the three orders. This is not surprising, as it was noted above that the similarity order represented in (6c) effectively treats the dummy value 0 that marks undefinedness as a plain feature value (e.g., it treats a partial binary feature as a total ternary feature). The situation is very different for the other two similarity orders: they are too sparse to validate the crucial assumption (16) of a most similar admissible underlying form. As an illustration, let’s focus on the similarity order corresponding to the diagram (6b) and consider the following minimal counterexample. Consider the feature *φ* = [strident] and assume it is defined only for coronals (see, e.g., Hayes 2009). Consider a surface form **x** = *x*_{1} . . . *x*_{ℓ} whose *i*th segment *x*_{i} is not coronal and is thus not assigned any value by the feature *φ*. Suppose that the feature *φ* has not been valued yet for the *i*th segment. Thus, *L*_{**x**} contains in particular (a) an underlying form **a′** = . . . whose *i*th segment is not coronal and is thus undefined for stridency; (b) an underlying form **a″** = . . . whose *i*th segment is a strident coronal; and (c) an underlying form **a‴** = . . . whose *i*th segment is a nonstrident coronal. We thus need in particular to compare the three candidate pairs (**a′**, **x**), (**a″**, **x**), and (**a‴**, **x**) relative to this similarity order. This requires in particular checking whether any two of the pairs of feature values (*φ*(*a′*_{i}), *φ*(*x*_{i})), (*φ*(*a″*_{i}), *φ*(*x*_{i})), and (*φ*(*a‴*_{i}), *φ*(*x*_{i})) are connected through a straight arrow in the diagram (6b). Since the feature *φ* is undefined for the surface segment *x*_{i}, the rightmost block of this diagram applies. But that block has no straight arrows. In other words, this similarity order is too sparse and establishes no similarity relations among (**a′**, **x**), (**a″**, **x**), and (**a‴**, **x**). The crucial assumption (16) of a most similar admissible underlying form thus fails when similarity is measured through this order. And the speed-up promised by output-drivenness is thus lost. Analogous considerations hold for the remaining similarity order.
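The counterexample can be checked mechanically. The sketch below encodes undefinedness as Python's `None` and implements only the per-feature comparison of the sparse order (loop arrows, plus straight arrows that require the feature to be defined for both the underlying candidate and the surface segment); this encoding and the function names are mine, not the article's.

```python
# Mechanical check of the [strident] counterexample under a sparse
# similarity order. None encodes an undefined feature value.

def weakly_more_similar(a_val, b_val, x_val):
    """(b, x) at least as similar as (a, x) with respect to one feature:
    either b matches a (loop arrows), or the feature is defined for both
    b and x with matching values (straight arrows)."""
    if b_val == a_val:
        return True
    return (b_val is not None and x_val is not None
            and b_val == x_val)

def most_similar_members(candidates, x_val):
    """Candidates at least as similar to the surface value as every
    other candidate in the set."""
    return [name for name, b in candidates.items()
            if all(weakly_more_similar(a, b, x_val)
                   for a in candidates.values())]
```

With a noncoronal surface segment (stridency undefined), none of the three candidates **a′**, **a″**, **a‴** qualifies, so the set has no most similar member; with a [+strident] coronal surface segment, the faithful candidate would.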

## 5 Conclusion

Suppose that a certain underlying form **a** is mapped to a certain surface form **x**. Suppose that another underlying form **b** is at least as similar to **x** as **a** is. Output-drivenness then requires that the underlying form **b** be mapped to that surface form **x** as well. Output-drivenness is thus predicated on a notion of similarity that is formalized through a similarity order ≤_{sim} defined among candidate pairs (**a**, **x**) and (**b**, **x**) that share the surface form **x**. Tesar (2014) develops a theory of output-drivenness under the assumption that all phonological features are total. Under this assumption, he manages to define the similarity order in such a way that it is not too strong, so that the corresponding notion of output-drivenness can be guaranteed for large typologies of OT grammars. At the same time, it is not too weak, so that the corresponding notion of output-drivenness has substantial learnability implications—for instance, by validating the crucial condition (16) that lexicons of admissible underlying forms defined by feature-based inconsistency detection admit a unique most similar underlying form.

This article has tackled the problem of extending output-drivenness to a framework that allows for partial features. Within this framework, the intuitive notion of similarity that underlies output-drivenness can be formalized through a hierarchy of three similarity orders, defined in section 2. They yield a hierarchy of corresponding notions of output-drivenness, ordered by their relative strength. Section 3 has established that output-drivenness relative to the two weaker similarity orders holds regardless of the details of the proper definition of featural identity. That is instead not the case for the stronger notion of output-drivenness relative to the strongest similarity order: it requires the strong definition (11a) of featural identity that also penalizes disparities in feature definedness, while the weak definition (11b) of featural identity does not suffice, because it only penalizes disparities in feature value. The difference between these two approaches to featural identity seems not to have been discussed in the phonological literature. Yet it is consequential for phonological learnability: output-drivenness relative to the two weaker similarity orders is too weak to support the extension of Tesar’s learnability implications from total to partial features, as shown in section 4. The stronger notion of output-drivenness relative to the strongest similarity order is needed instead, as that is the only similarity order that guarantees the crucial condition (16). In conclusion, the extension of output-drivenness from total to partial phonological features has subtle implications for the notion of featural identity: the strong construal of featural identity is needed, because the weak construal leads to a mismatch between what is needed (the stronger notion of output-drivenness required for learnability) and what can be afforded (the weaker notions of output-drivenness for which guarantees on OT output-drivenness can be given).^{11}

## Appendix

This appendix provides a proof of theorem 2B, which extends theorem 1 to partial phonological features. For convenience, the proof is broken up into three steps, corresponding to lemmas 1–3. The proof is a straightforward extension of the analysis developed in Tesar 2014:chap. 3, adapted to the restrictive phonological framework considered here. Throughout this appendix, Σ is a finite set of segments and Φ is a finite set of segmental features, which can be either binary or multivalued, total or partial. A candidate pair is any pair of strings of segments of the same length.

### A.1 The Faithfulness Output-Drivenness Condition

Lemma 1 guarantees output-drivenness of the OT grammar corresponding to any ranking of a constraint set whose faithfulness constraints all satisfy the *Faithfulness Output-Drivenness Condition* (FODC) stated in (19). No assumptions are made on the markedness constraints or on the similarity order. The proof is taken from Tesar 2014:sec. 3.2.

**Lemma 1** *Consider an arbitrary similarity order* ≤_{sim} *among candidate pairs. Assume that every faithfulness constraint F in the constraint set satisfies condition (19) for any candidate pairs* (**a**, **x**) *and* (**b**, **x**) *such that* (**a**, **x**) ≤_{sim} (**b**, **x**) *and any string* **y** *of the same length as* **x**.

(19)
- a. *If F*(**b**, **y**) < *F*(**b**, **x**), *then F*(**a**, **y**) < *F*(**a**, **x**).
- b. *If F*(**a**, **x**) < *F*(**a**, **y**), *then F*(**b**, **x**) < *F*(**b**, **y**).

*Then, the OT grammar corresponding to any ranking of the constraint set is output-driven relative to that similarity order* ≤_{sim}.

*Proof*. Assume that the OT grammar corresponding to a certain ranking maps the less similar underlying string **a** to the surface string **x**. Let me show that it then also maps the more similar underlying string **b** to that same surface string **x**. This means that I have to show that the candidate pair (**b**, **x**) beats any other candidate pair (**b**, **y**) according to that ranking. The assumption that **a** is mapped to **x** entails in particular that the candidate pair (**a**, **x**) beats the candidate pair (**a**, **y**). This means in turn that one constraint *C* that prefers the winner pair (**a**, **x**) to the loser pair (**a**, **y**) is ranked above every constraint *C*_{1}, *C*_{2}, . . . that instead prefers (**a**, **y**) to (**a**, **x**), as represented in (20).

If the constraint *C* top-ranked in (20) is a markedness constraint, then it does not care whether the underlying form is **a** or **b**. The fact that it prefers (**a**, **x**) to (**a**, **y**) thus entails that it also prefers (**b**, **x**) to (**b**, **y**). If instead *C* is a faithfulness constraint, this entailment is guaranteed by the FODC (19b). The ranking conditions (20) can thus be updated as in (21).

Consider a constraint that incorrectly prefers (**b**, **y**) to (**b**, **x**). If it is a markedness constraint, then again it also prefers (**a**, **y**) to (**a**, **x**); that is, it is one of the constraints *C*_{1}, *C*_{2}, . . . ranked at the bottom of (21). If instead it is a faithfulness constraint, that same conclusion is guaranteed by the FODC (19a). The ranking conditions (21) can thus be updated as in (22).

The ranking conditions (22) say that (**b**, **x**) wins over (**b**, **y**), as a constraint *C* that prefers (**b**, **x**) to (**b**, **y**) is ranked above every constraint *C*_{1}, *C*_{2}, . . . that instead prefers (**b**, **y**) to (**b**, **x**). Since this conclusion holds for any candidate **y**, the OT grammar considered maps the more similar underlying string **b** to the surface string **x**, as required by output-drivenness. □
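Lemma 1 can be illustrated by brute force on a deliberately tiny system. The one-segment strings, the single total binary feature, and the three constraints below are my own illustrative assumptions: the identity constraint satisfies the FODC (19) here, and the markedness constraints satisfy it vacuously, so every ranking should yield an output-driven grammar.

```python
from itertools import permutations

# Brute-force illustration of lemma 1: one-segment strings over a single
# total binary feature, with a constraint set whose only faithfulness
# constraint Ident satisfies the FODC (19). All names are illustrative.

SEGS = ['+', '-']
CONSTRAINTS = {
    'Ident':  lambda a, x: int(a != x),    # faithfulness
    '*Plus':  lambda a, x: int(x == '+'),  # markedness against '+'
    '*Minus': lambda a, x: int(x == '-'),  # markedness against '-'
}

def ot_map(ranking, a):
    """Optimal surface form for underlying a under the given ranking."""
    return min(SEGS, key=lambda x: [CONSTRAINTS[c](a, x) for c in ranking])

def at_least_as_similar(a, b, x):
    """(a, x) <=_sim (b, x) for one total feature, per the disjunctive
    definition: b matches the surface x, or b matches the underlying a."""
    return b == x or b == a

# Check output-drivenness of every ranking in the toy typology.
for ranking in permutations(CONSTRAINTS):
    for a in SEGS:
        x = ot_map(ranking, a)
        for b in SEGS:
            if at_least_as_similar(a, b, x):
                assert ot_map(ranking, b) == x
```

The loop at the end raises no assertion error: whenever **b** is at least as similar to the predicted surface form as **a** is, **b** is mapped to that surface form as well.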

### A.2 Simplifying the Faithfulness Output-Drivenness Condition

Lemma 1 states the FODC (19) for an arbitrary faithfulness constraint *F*. Lemma 2 specializes this condition to identity faithfulness constraints. This lemma and the following lemma 3 hold regardless of the choice between the strong construal (11a) and the weak construal (11b) of the identity constraint IDENT_{φ} relative to a partial feature *φ*. Yet since the case of the strong construal is already covered by theorem 2A, I focus here on the case of the weak construal. The superscript *weak* introduced in section 3.3 is suppressed to simplify the notation.

**Lemma 2** *Consider the weakly defined identity faithfulness constraint* IDENT_{φ} *relative to a feature φ possibly partial relative to the segment set* Σ. *The FODC (19) specializes to this specific case F* = IDENT_{φ} *as in (23), where the sums run over the set I of those indices i* = 1, . . . , ℓ *where the two surface strings* **x** = *x*_{1} . . . *x*_{ℓ} *and* **y** = *y*_{1} . . . *y*_{ℓ} *disagree relative to the feature φ:*

(23)
- a. If Σ_{i∈I} IDENT_{φ}(*b*_{i}, *y*_{i}) < Σ_{i∈I} IDENT_{φ}(*b*_{i}, *x*_{i}), then Σ_{i∈I} IDENT_{φ}(*a*_{i}, *y*_{i}) < Σ_{i∈I} IDENT_{φ}(*a*_{i}, *x*_{i}).
- b. If Σ_{i∈I} IDENT_{φ}(*a*_{i}, *x*_{i}) < Σ_{i∈I} IDENT_{φ}(*a*_{i}, *y*_{i}), then Σ_{i∈I} IDENT_{φ}(*b*_{i}, *x*_{i}) < Σ_{i∈I} IDENT_{φ}(*b*_{i}, *y*_{i}).

*Proof*. Focus on the first FODC (19a); an analogous reasoning holds for the second FODC (19b). By (10)/(11), the identity faithfulness constraint IDENT_{φ} is defined for a candidate pair of strings by summing over pairs of corresponding segments. The first FODC (19a) can thus be made explicit as in (25) when the faithfulness constraint *F* is IDENT_{φ}.

(25) If Σ_{i=1}^{ℓ} IDENT_{φ}(*b*_{i}, *y*_{i}) < Σ_{i=1}^{ℓ} IDENT_{φ}(*b*_{i}, *x*_{i}), then Σ_{i=1}^{ℓ} IDENT_{φ}(*a*_{i}, *y*_{i}) < Σ_{i=1}^{ℓ} IDENT_{φ}(*a*_{i}, *x*_{i}).

(26) Σ_{i=1}^{ℓ} IDENT_{φ}(*b*_{i}, *y*_{i}) =_{(a)} Σ_{i∈I} IDENT_{φ}(*b*_{i}, *y*_{i}) + Σ_{i∉I} IDENT_{φ}(*b*_{i}, *y*_{i}) =_{(b)} Σ_{i∈I} IDENT_{φ}(*b*_{i}, *y*_{i}) + Σ_{i∉I} IDENT_{φ}(*b*_{i}, *x*_{i})

(27) Σ_{i=1}^{ℓ} IDENT_{φ}(*b*_{i}, *x*_{i}) = Σ_{i∈I} IDENT_{φ}(*b*_{i}, *x*_{i}) + Σ_{i∉I} IDENT_{φ}(*b*_{i}, *x*_{i})

In step (26a) and in (27), the sum over {1, . . . , ℓ} has been split into two sums over *I* and over its complement. In step (26b), *y*_{i} has been replaced with *x*_{i} in the second sum, which runs over the complement of *I*, because *x*_{i} and *y*_{i} agree relative to the feature *φ* for every index *i* ∉ *I* and the faithfulness constraint IDENT_{φ} thus cannot distinguish between them. Because of (26) and (27), the inequality in the antecedent of (25) is equivalent to the inequality in the antecedent of (23a), as the quantity Σ_{i∉I} IDENT_{φ}(*b*_{i}, *x*_{i}) appears on both sides and can thus be ignored. An analogous reasoning shows that the inequality in the consequent of (25) is equivalent to the inequality in the consequent of (23a). □
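Lemma 2's reduction can be spot-checked numerically: comparing the IDENT totals over the whole string is equivalent to comparing them over the disagreement set *I* alone, since the terms outside *I* coincide on both sides. The encoding below (`None` for an undefined value, disagreement as plain inequality of encoded values, weak construal of IDENT) is my own.

```python
import itertools

# Exhaustive numerical check of lemma 2's reduction on strings of
# length 2: the full comparison, as in (25), and the comparison
# restricted to the disagreement set I, as in (23), always agree.

def ident(u, s):
    """Weak construal (11b): a violation requires both values defined
    and distinct. None encodes an undefined value."""
    return int(u is not None and s is not None and u != s)

VALUES = ['+', '-', None]
LENGTH = 2
strings = list(itertools.product(VALUES, repeat=LENGTH))
for b, x, y in itertools.product(strings, repeat=3):
    I = [i for i in range(LENGTH) if x[i] != y[i]]   # disagreement set
    full = sum(ident(b[i], y[i]) for i in range(LENGTH)) \
         < sum(ident(b[i], x[i]) for i in range(LENGTH))
    restricted = sum(ident(b[i], y[i]) for i in I) \
               < sum(ident(b[i], x[i]) for i in I)
    assert full == restricted
```

The loop raises no assertion error, because for every index outside *I* the two surface values coincide, so the corresponding IDENT terms add the same constant to both sides of the comparison.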

### A.3 Verifying the Faithfulness Output-Drivenness Condition

The following lemma 3 guarantees that identity faithfulness constraints satisfy the FODC (23) when the similarity order is properly defined, thus completing the proof of theorem 2.

**Lemma 3** *The weakly defined identity faithfulness constraint* IDENT_{φ} *relative to any (possibly partial) feature φ in* Φ *satisfies the FODC (23) relative to either of the two weaker similarity orders provided by definition 3*.

*Proof*. Focus on the first FODC (23a); an analogous reasoning holds for the second FODC (23b). Furthermore, focus on the similarity order characterized by the disjunction in (28): the similarity inequality (**a**, **x**) ≤_{sim} (**b**, **x**) says that, for every feature *φ* in Φ and every index *i*, one of the two conditions in (28) holds.

(28)
- a. Either: Feature *φ* is defined for *b*_{i} and *x*_{i} and furthermore *φ*(*b*_{i}) = *φ*(*x*_{i}).
- b. Or: Feature *φ* is undefined for both *b*_{i} and *a*_{i} or else it is defined for both and furthermore *φ*(*b*_{i}) = *φ*(*a*_{i}).

Each of the two disjuncts (28a) and (28b) individually entails each of the two implications (29). That is obvious for the second disjunct (28b), as it says that the two segments *a*_{i} and *b*_{i} do not differ relative to the feature *φ* and thus cannot be distinguished by the faithfulness constraint IDENT_{φ}. Furthermore, the first disjunct (28a) entails the implication (29a) because it ensures that its consequent is true—namely, that IDENT_{φ}(*b*_{i}, *y*_{i}) = 1. In fact, feature *φ* is defined for segment *y*_{i}, because otherwise the antecedent IDENT_{φ}(*a*_{i}, *y*_{i}) = 1 would be false according to the weak construal (11b) of featural identity. Feature *φ* is also defined for segment *b*_{i}, as ensured by the disjunct (28a). Finally, the two segments *b*_{i} and *y*_{i} disagree relative to the feature *φ*, because the disjunct (28a) says that *φ*(*b*_{i}) = *φ*(*x*_{i}) and the hypothesis that *i* ∈ *I* says that *φ*(*x*_{i}) ≠ *φ*(*y*_{i}). Finally, the first disjunct (28a) also entails the implication (29b) because it ensures that its antecedent is false—namely, that IDENT_{φ}(*b*_{i}, *x*_{i}) = 0. In fact, the disjunct (28a) says in particular that *φ*(*b*_{i}) = *φ*(*x*_{i}), so that the segment pair (*b*_{i}, *x*_{i}) does not violate IDENT_{φ}.

(29) For every *i* ∈ *I*:
- a. if IDENT_{φ}(*a*_{i}, *y*_{i}) = 1, then also IDENT_{φ}(*b*_{i}, *y*_{i}) = 1;
- b. if IDENT_{φ}(*b*_{i}, *x*_{i}) = 1, then also IDENT_{φ}(*a*_{i}, *x*_{i}) = 1.

The chain of inequalities in (30) finally shows that the faithfulness constraint IDENT_{φ} indeed satisfies the first FODC (23a). Steps (30a) and (30c) are guaranteed by the implications (29a) and (29b), respectively. Step (30b) is guaranteed by the antecedent of the FODC (23a).

(30) Σ_{i∈I} IDENT_{φ}(*a*_{i}, *y*_{i}) ≤_{(a)} Σ_{i∈I} IDENT_{φ}(*b*_{i}, *y*_{i}) <_{(b)} Σ_{i∈I} IDENT_{φ}(*b*_{i}, *x*_{i}) ≤_{(c)} Σ_{i∈I} IDENT_{φ}(*a*_{i}, *x*_{i})

The logical structure of this proof can be highlighted as follows. The FODC (23a) for the identity faithfulness constraint IDENT_{φ} only looks at those indices *i* ∈ *I* where the two candidates **x** and **y** differ for the feature *φ*, yielding the two reverse implications (29a) and (29b). The lemma trivially follows from these two implications, as shown in (30). □
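The core of the proof, the implications (29), can be verified exhaustively for a single feature slot. The encoding below (`None` for an undefined value) is mine, not the article's.

```python
import itertools

# Exhaustive check of the implications (29): whenever the ith segments
# satisfy the similarity disjunction (28) and the index lies in the
# disagreement set I (so that the defined surface values satisfy
# phi(x_i) != phi(y_i)), the weak-construal IDENT implications hold.

def ident(u, s):
    """Weak construal (11b) of featural identity; None = undefined."""
    return int(u is not None and s is not None and u != s)

VALUES = ['+', '-', None]
for a, b, x, y in itertools.product(VALUES, repeat=4):
    in_I = x is not None and y is not None and x != y
    disjunct_28a = b is not None and x is not None and b == x
    disjunct_28b = (a is None and b is None) or \
                   (a is not None and b is not None and a == b)
    if in_I and (disjunct_28a or disjunct_28b):
        assert ident(a, y) <= ident(b, y)   # implication (29a)
        assert ident(b, x) <= ident(a, x)   # implication (29b)
```

The loop raises no assertion error: under (28b) the two underlying segments are indistinguishable for IDENT, and under (28a) the consequent of (29a) is forced to be true and the antecedent of (29b) to be false, exactly as in the proof.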

## Notes

^{1} Another dichotomy that has figured prominently in the literature on feature systems is the one between *univalent*/*privative* features and *binary*/*multivalued* features. Although crucial within derivational frameworks, this distinction seems to play only a marginal role within OT. For instance, Wetzels and Mascaró (2001:237) address the issue of whether the feature [voice] is privative or binary and conclude that “it is . . . much harder to argue against privativity in OT than it is in derivational phonology. This is due to the fact that with the IDENT and AGREE constraint families it is as easy to refer to the absence of a privative feature as it is to refer to the unmarked value of a binary feature.” Since Tesar’s theory of output-drivenness is cast within OT, the extension of the theory to privative features is a less pressing issue than its extension to partial features.

^{2} Boldfaced letters from the beginning of the alphabet (**a**, **b**, . . . ) and from the end of the alphabet ( . . . **x**, **y**, **z**) are used to denote underlying and surface forms, respectively.

^{3} The relation is

^{4} As noted above, each of the nodes in the diagrams (6a), (6b), and (6c) must come with a loop arrow in order to ensure reflexivity of the resulting similarity relation. Hence, the three disjunctions in (7) all share the same second disjunct corresponding to the loop arrows. The three disjunctions only differ with respect to their first disjunct, corresponding to the straight arrows.

^{5} Indeed, nothing would change in tableaux E and F of Harrison and Kaun 2001:221–222 if

^{6} Neither of the two weaker similarity orders establishes a similarity relation between the candidate pairs (**a**, **x**) and (**b**, **x**). In fact, consider the order defined in terms of the disjunction (7b). The second disjunct of this disjunction again fails. And the first disjunct fails as well, because it requires in particular that the feature *φ* = [strident] be defined for both /f/ (which plays the role of the unique segment of **b**) and [b] (which plays the role of the unique segment of **x**), which is not the case. The fact that the OT grammar described in (12) maps **a** = /ð/ but not **b** = /f/ to **x** = [b] is not an obstacle to its output-drivenness relative to these two similarity orders, since **b** = /f/ is not more similar to **x** = [b] than **a** = /ð/ is, when similarity is measured through their more demanding standards.

^{7} Strictly speaking, the learning problem thus formulated is trivial: it admits the trivial solution where the underlying forms are all identical to the given surface forms and the OT grammar corresponds to a ranking with faithfulness constraints at the top. Indeed, more precise formulations of this learning problem considered in the literature rule out this trivial solution through additional conditions. First, the problem formulation is usually refined with some restrictiveness condition that rules out the choice of a grammar with faithfulness at the top (Tesar 2014:chap. 9). Second, the problem formulation is usually refined with conditions on the lexicon of underlying forms that rule out the choice of fully faithful underlying forms. For instance, the learner is typically assumed to be trained not on a set {**x**_{1}, **x**_{2}, . . . } of unadorned surface forms but on a paradigm of morphologically decomposed surface forms annotated with the corresponding meanings. For instance, a learner of Dutch would be trained on surface forms such as **x**_{1} = [pɑt + Ø] and **x**_{2} = [pɑd + ən], split up into stem and suffix and annotated with the corresponding meanings TOADSG and TOADPL. These meanings say that the two surface strings **x**_{1}, **x**_{2} correspond to two underlying strings that share the underlying stem. Since the two surface stems differ in voicing of their final consonant, the final consonant of the shared underlying stem must be nonfaithful relative to one of the two surface stems. To simplify the presentation and avoid orthogonal issues, this section focuses on the bare formulation of the learning problem, without additional conditions on the grammars or the underlying forms.

^{8} Let me clarify this statement. So far, I have assumed the learner to represent its current ranking information in terms of a set *R* of currently admissible rankings that is initialized to the set of all possible rankings and then pruned iteratively at the ERI step. This is of course unfeasible: there are just too many rankings! A more compact representation of the current ranking information is needed. A natural idea is to represent a possibly large set *R* of constraint rankings through a small set of ranking conditions. Prince (2002) develops the formalism of *elementary ranking conditions* (ERCs), and Merchant (2008) suggests using that specific format to compactly represent the learner’s ranking information. Thus, the learner starts from an empty set of ERCs, corresponding to the set of all constraint rankings. And the ERI step (15) updates the current set of ERCs by enriching it with the additional set of ERCs corresponding to the set of rankings that map at least one underlying form in *L*_{**x**} to the surface form **x**. Merchant (2008) develops a sophisticated technique to tackle the difficult problem of computing the latter set of ERCs. Unfortunately, his technique relies on the operation of *fusional closure*. There are no results (that I am aware of) on the efficient computation of fusional closure, although it is unlikely that it can be computed efficiently in the general case. In conclusion, the price to be paid for compressing the data structure from sets of constraint rankings to sets of ERCs is that the original formulation of the ERI step (15) becomes computationally demanding.

^{9} Let me clarify this statement. Tesar (2014) shows that the most similar underlying form **b** in *L*_{**x**} can be computed quickly (without enumerating all forms in *L*_{**x**}) through the prescription in (18) below. Suppose that the learner represents its current ranking information in terms of sets of ERCs, as explained in footnote 8. The reformulated ERI step (17) can be described in terms of ERCs as follows: enrich the current set of ERCs with the additional set of ERCs that are satisfied by all and only the rankings that map the most similar underlying form **b** to the surface form **x**. The latter set of ERCs involves a single underlying form **b** and can therefore be computed efficiently.

^{10} There is an obvious problem looming out of the definition (18): if there are dependencies between the features already set and those still unset, there might exist no form **b** that satisfies this definition. This is a general problem afflicting this approach, which is orthogonal to the issue raised by partial features.

^{11} An anonymous reviewer points out that partial features raise no threat to alternative approaches to the problem of learning underlying forms, such as those by Jarosz (2006) and Riggle (2006). Yet those approaches and Tesar’s (2014) approach have very different goals. Tesar’s theory of output-drivenness is explicitly construed as an attempt at using assumptions on constraints and representations to improve learning speed and algorithmic efficiency. Indeed, the major learnability implication of output-drivenness is a substantial speed-up of one of the subroutines of the inconsistency detection approach, as recalled in section 4. The above-mentioned alternative approaches make no representational assumptions but ignore issues of algorithmic efficiency. For instance, Riggle explicitly acknowledges that “sorting, updating, and storing a potentially huge number of

## Acknowledgments

The research reported in this article has been supported by a Marie Curie Intra European Fellowship (grant agreement number: PIEF-GA-2011-301938). I would like to thank Bruce Tesar and an anonymous *LI* reviewer for very useful comments.

## References

Tesar, Bruce. 2014. *Output-driven phonology: Theory and learning*. Cambridge: Cambridge University Press.