Abstract

α-integration and the α-GMM have recently been proposed for integrated stochastic modeling. However, no approach has been available to date for estimating α-GMM model parameters statistically from a set of training data. In this letter, parameter updating formulas are mathematically derived under the maximum likelihood criterion using an adapted expectation-maximization algorithm. With this method, the model parameters of an α-GMM are reestimated iteratively. The updating formulas were found to be simple and systematically compatible with the GMM equations. This advantage renders the α-GMM a superset of the GMM with similar computational complexity. The method has been effectively applied to realistic speaker recognition applications.

1.  Introduction

The gaussian mixture model (GMM) has been well established for decades and is a dominant stochastic modeling technique for a variety of pattern recognition applications. Although the conventional GMM has good capacity for stochastic modeling, it often runs into problems when it is applied to robust recognition in adverse conditions. For instance, in speaker recognition, the GMM excels at modeling distribution characteristics of data from high-band clean speech but is inferior at modeling distributions from low-band or noisy (convolutional or additive noise) data (Reynolds, 1995; Wu, Morris, & Koreman, 2005; Wu, 2006). This has become a significant issue in realistic applications. This is the general background of robust speaker recognition, the topic that this letter addresses. In this letter, we often discuss α-GMM in the context of a speaker recognition application. However, GMM and α-GMM can be applied to other domains as well.

α-GMM is one of the approaches proposed to address the problem of robust modeling; it has recently been proposed as an extension of the conventional GMM into a new framework (Wu, 2008). α-GMM is a more sophisticated model that integrates stochastic modeling components in a nonlinear way, whereas the conventional GMM can be regarded as combining its components in a linear way. The procedure of nonlinear combination is referred to as α-integration, which introduces an additional factor α to each component in an integrated stochastic model. With α set to −1, the α-GMM degenerates into the conventional GMM. Therefore, α-GMM is in fact a superset of traditional GMM, and the new framework of α-GMM is a natural extension of the canonical GMM. With the value of α set smaller than −1, the integrated probability density function (pdf) favors larger component values and deemphasizes smaller component values. The integrated pdf therefore possesses a flatter distribution than the classical GMM. This feature, referred to as α-warping, is discussed in section 3.

α-GMM has a variety of advantages over conventional GMM, besides being a superset of GMM. α-GMM combines stochastic modeling components with α-integration, and α-integration has been proved optimal in the sense of minimizing the extended Kullback-Leibler distance (also referred to as α-divergence; see definition 9) between an integrated stochastic model and its components. Moreover, it has also been found that α-integration is very similar to the nonlinear way of combining multiple source channels that occurs in the human brain (Amari, 2007). Hence, α-GMM can be considered a more intelligent modeling technique that uses a bio-inspired mechanism. Traditional GMM does not have these advantages.

To the best of our knowledge, no algorithm has been proposed to date to address the issue of estimating model parameters for α-GMM given a data set. Although Amari (2007) mentioned the conceptual idea of gradient descent, that alone is far from sufficient for applying α-GMM to realistic tasks such as speaker recognition. This motivates us to propose a training algorithm that can automatically estimate model parameters on a given training set in an iterative way. The proposed algorithm is mathematically derived by solving an optimization problem based on the maximum likelihood criterion with an adapted application of the expectation-maximization (EM) algorithm (Baum & Sell, 1968; Baum, Petrie, Soules, & Weiss, 1970; Dempster, Laird, & Rubin, 1977). The reestimation equations were found to be simple and compatible with the conventional GMM equations. This property makes α-GMM an ideal extension of the conventional gaussian mixture model.

Although this letter began with the example of speaker recognition, the method proposed here is quite general. In fact, the focus of this letter is the mathematical derivation of a theorem to reestimate model parameters for α-GMM, with a rigorous proof. The theory is also supported by preliminary speaker recognition experiments.

This letter has two parts. First, it presents a theorem concerning the reestimation formulas for α-GMM. This is the main result. The rest of the letter proves this theorem.

The proof is based on applying the EM algorithm adaptively. In the proof, following the general framework of the EM algorithm, the two steps of expectation (E-step) and maximization (M-step) will be carried out. In the E-step, the expectation of an objective function will be given based on the maximum likelihood criterion; in the M-step, the expectation will be maximized to obtain the reestimation formulae. This is an iterative procedure that eventually converges to a local optimum.

Besides the theoretical proof of the main theorem, we also present experimental results based on a simple but moderately sized realistic robust speaker identification task in order to show the difference between the proposed learning algorithm for α-GMM and conventional GMM. The experiments were carried out on a corpus of telephony speech, NTIMIT (Fisher, Doddington, & Goudie-Marshall, 1986). A moderate number of speakers (162) were tested for α-GMM and traditional GMM. A wide range of values of the factor α was evaluated. It was found that the accuracy with all values of α set smaller than −1 was higher than the baseline. In particular, the accuracy with α = −6 had the largest improvement, by 3.8% (a relative error reduction of 7.8%). This basically confirms that the proposed learning algorithm is valid.

The rest of the letter is organized as follows. In section 2, we define some elementary notation that will be used throughout this letter. Section 3 presents the basic concepts of α-GMM and clarifies the relationship between α-GMM and conventional GMM. Section 4 presents the main theorem concerning the learning algorithm for α-GMM, followed by a detailed proof. Experimental results for a speaker identification task on NTIMIT are described in section 5. Section 6 notes some advantages and limitations of the proposed method. Conclusions are drawn in section 7.

2.  Notations

Before delving into detail to present the theory of α-GMM, we first define some basic concepts and notations that will be used throughout this letter:

Definition 1.

x = (x_1, x_2, …, x_d). A d-dimensional vector x represents a random variable, which often stands for a frame in speaker recognition.

Definition 2.

X = {x_t}, t ∈ [1, N]. Denote by X a set of N random vectors x_t. In speaker recognition, we often use it to represent a speech utterance spoken by a certain speaker.

Definition 3.

p(x). A probability density function (pdf) for a random variable x.

Definition 4.

P(x). A probability distribution function for a random variable x.

Definition 5.
N(x ∣ μ, Σ). For a d-dimensional random variable x, a d-dimensional mean vector μ, and a d × d covariance matrix Σ, denote a multiple-dimensional normal distribution by N(x ∣ μ, Σ), in short N(x). Such a normal distribution is also referred to as a gaussian distribution:
N(x \mid \mu, \Sigma) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\!\left( -\frac{1}{2} (x - \mu)^{\mathrm{T}} \Sigma^{-1} (x - \mu) \right)
2.1
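
For readers who prefer a computational view, the following Python sketch (not part of the original text) evaluates equation 2.1 directly and cross-checks it against SciPy; the test point and parameters are arbitrary illustrative values.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_pdf(x, mu, cov):
    """Evaluate the d-dimensional normal density N(x | mu, cov) of equation 2.1."""
    d = len(mu)
    diff = x - mu
    norm_const = 1.0 / np.sqrt(((2.0 * np.pi) ** d) * np.linalg.det(cov))
    return norm_const * np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff)

# Cross-check against SciPy's implementation at an arbitrary point.
x = np.array([0.3, -1.2])
mu = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.2], [0.2, 0.5]])
assert np.isclose(gaussian_pdf(x, mu, cov), multivariate_normal(mu, cov).pdf(x))
```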

Definition 6.
f_α(p(x)). A warping function (Amari, 2007) for a pdf p(x) with a factor α, which is defined in two cases:
f_\alpha(p(x)) = \begin{cases} \dfrac{2}{1-\alpha}\, p(x)^{\frac{1-\alpha}{2}}, & \alpha \neq 1, \\ \log p(x), & \alpha = 1. \end{cases}
2.2

Definition 7.
f_α^{−1}(y). The α-based inverse function of f_α, which is also defined in two cases:
f_\alpha^{-1}(y) = \begin{cases} \left( \dfrac{1-\alpha}{2}\, y \right)^{\frac{2}{1-\alpha}}, & \alpha \neq 1, \\ \exp(y), & \alpha = 1. \end{cases}
2.3

Definition 8.
α-integration. We call the following equation α-integration (Hardy, Littlewood, & Polya, 1952; Petz & Temesi, 2005), for a set of pdf's p_i(x), i ∈ [1, K], using equations 2.2 and 2.3,
\tilde{p}(x) = c\, f_\alpha^{-1}\!\left( \sum_{i=1}^{K} w_i f_\alpha(p_i(x)) \right)
2.4
where the weights w_i need to satisfy w_i ⩾ 0 and \sum_{i=1}^{K} w_i = 1, and c is a normalization constant that makes the integrated \tilde{p}(x) a pdf:
c = \left( \int f_\alpha^{-1}\!\left( \sum_{i=1}^{K} w_i f_\alpha(p_i(x)) \right) dx \right)^{-1}
2.5
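
The α-integration of equations 2.4 and 2.5 can be carried out numerically when the component pdf's are tabulated on a grid. The sketch below is a minimal illustration under that assumption; the function name and grid setup are ours, not from the original text.

```python
import numpy as np

def alpha_integrate(pdf_values, weights, alpha, dx):
    """
    alpha-integration (equation 2.4) of K component pdfs tabulated on a 1-D grid.
    pdf_values: shape (K, n_grid); weights: shape (K,), nonnegative and summing to one.
    The constant c of equation 2.5 is obtained by numerical normalization.
    """
    pdf_values = np.asarray(pdf_values, dtype=float)
    w = np.asarray(weights, dtype=float)[:, None]
    if alpha == 1.0:                       # f_1(p) = log p, inverse is exp
        mixed = np.exp(np.sum(w * np.log(pdf_values), axis=0))
    else:                                  # f_alpha(p) proportional to p^((1-alpha)/2)
        k = (1.0 - alpha) / 2.0
        mixed = np.sum(w * pdf_values ** k, axis=0) ** (1.0 / k)
    c = 1.0 / np.sum(mixed * dx)           # normalize so the result integrates to one
    return c * mixed

# Example: alpha = -1 must reproduce the ordinary (linear) mixture.
grid = np.linspace(-8.0, 8.0, 4001)
dx = grid[1] - grid[0]
p1 = np.exp(-0.5 * grid**2) / np.sqrt(2.0 * np.pi)
p2 = np.exp(-0.5 * (grid - 2.0)**2) / np.sqrt(2.0 * np.pi)
lin = alpha_integrate([p1, p2], [0.4, 0.6], alpha=-1.0, dx=dx)
assert np.allclose(lin, 0.4 * p1 + 0.6 * p2, atol=1e-6)
```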

Definition 9.
α-divergence. The α-divergence (Chernoff, 1952) between two pdf's, p(x) and q(x), is denoted as
D_\alpha(p(x) \,\|\, q(x)) = \frac{4}{1-\alpha^{2}} \left( 1 - \int p(x)^{\frac{1-\alpha}{2}}\, q(x)^{\frac{1+\alpha}{2}}\, dx \right), \qquad \alpha \neq \pm 1,
2.6
with the cases α = ±1 defined by the corresponding limits.

Clearly, α-divergence has four fundamental properties:

  • Dα(p(x)||q(x)) ⩾ 0

  • Dα(p(x)||p(x)) = 0

  • D−1(p(x)||q(x)) = KL(p(x)||q(x))

  • D1(p(x)||q(x)) = KL(q(x)||p(x))

The KL[•] above is the well-known Kullback-Leibler (KL) divergence.
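
These properties can be verified numerically. The following sketch (our illustration, not part of the letter) evaluates the α-divergence on a 1-D grid using the closed form above for α ≠ ±1 and the KL forms at α = ±1, with arbitrary example densities.

```python
import numpy as np

def alpha_divergence(p, q, alpha, dx):
    """alpha-divergence of equation 2.6 between two densities tabulated on a 1-D grid."""
    if np.isclose(alpha, -1.0):            # D_{-1}(p||q) = KL(p||q)
        return np.sum(p * np.log(p / q)) * dx
    if np.isclose(alpha, 1.0):             # D_{1}(p||q) = KL(q||p)
        return np.sum(q * np.log(q / p)) * dx
    integrand = p ** ((1.0 - alpha) / 2.0) * q ** ((1.0 + alpha) / 2.0)
    return 4.0 / (1.0 - alpha**2) * (1.0 - np.sum(integrand) * dx)

grid = np.linspace(-10.0, 10.0, 8001)
dx = grid[1] - grid[0]
p = np.exp(-0.5 * grid**2) / np.sqrt(2.0 * np.pi)
q = np.exp(-0.5 * (grid - 1.0)**2 / 4.0) / np.sqrt(2.0 * np.pi * 4.0)

# Nonnegativity, and continuity toward the KL limit at alpha = -1.
assert alpha_divergence(p, q, 0.0, dx) >= 0.0
assert np.isclose(alpha_divergence(p, q, -0.999, dx),
                  alpha_divergence(p, q, -1.0, dx), atol=1e-2)
```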

Bearing in mind these notations, we present the theory of α-GMM.

3.  α-Integrated Gaussian Mixture Model

The α-gaussian mixture model (α-GMM) is a kind of gaussian mixture model built with α-integration. A conventional GMM integrates its individual mixtures with a convex combination, ∑_i w_i = 1, w_i > 0, where w_i is the weight of the ith gaussian mixture. The α-GMM instead integrates its components with α-integration:

Definition 10.
Given K multiple-dimensional gaussian distributions N_i(x), i ∈ [1, K], and a sequence of weights {w_i}, where \sum_{i=1}^{K} w_i = 1, w_i > 0, the α-GMM for a random variable x is denoted as the α-integration of the N_i(x):
p_\alpha(x) = c\, f_\alpha^{-1}\!\left( \sum_{i=1}^{K} w_i f_\alpha(N_i(x)) \right)
3.1
where c is a normalization factor of the form of equation 2.5 that makes p_α(x) a pdf.

By substituting equations 2.2 and 2.3 into 3.1, α-GMM can be rewritten in terms of two cases:
p_\alpha(x) = \begin{cases} c \left( \sum_{i=1}^{K} w_i N_i(x)^{\frac{1-\alpha}{2}} \right)^{\frac{2}{1-\alpha}}, & \alpha \neq 1, \\ c \exp\!\left( \sum_{i=1}^{K} w_i \log N_i(x) \right), & \alpha = 1, \end{cases}
3.2
where c is a pdf normalization constant.
The α-GMM thus defines a family of integrated pdf's, depending on the value of the factor α. When α = 1, equation 3.2 degenerates to the exponentially integrated model. When α = −1, equation 3.2 becomes the conventional GMM. When α = −3, for instance, the α-GMM becomes the quadratic average of its gaussian mixtures, demonstrated as follows for the special case of K = 2 for simplicity:
p_{-3}(x) = c \sqrt{ w_1 N_1(x)^{2} + w_2 N_2(x)^{2} }
3.3

The α-GMM can therefore be regarded as a superset of probability density functions: the conventional GMM belongs to the α-GMM family, and GMM is a special case of α-GMM. This is one of the important properties of α-GMM.
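
To make the special cases concrete, here is a minimal Python sketch of the α-GMM score of equation 3.2 with the normalization constant c omitted (omitting c does not change the ranking of likelihood scores); the component parameters are illustrative values of our own choosing.

```python
import numpy as np
from scipy.stats import multivariate_normal

def alpha_gmm_score(x, means, covs, weights, alpha):
    """Unnormalized alpha-GMM score of equation 3.2 at a point x (constant c omitted)."""
    comp = np.array([multivariate_normal(m, c).pdf(x) for m, c in zip(means, covs)])
    w = np.asarray(weights, dtype=float)
    if alpha == 1.0:
        return np.exp(np.sum(w * np.log(comp)))
    k = (1.0 - alpha) / 2.0
    return np.sum(w * comp ** k) ** (1.0 / k)

means = [np.zeros(2), np.ones(2)]
covs = [np.eye(2), 0.5 * np.eye(2)]
weights = [0.5, 0.5]
x = np.array([0.2, 0.7])

# alpha = -1 reproduces the ordinary GMM; alpha = -3 is the quadratic average of eq. 3.3.
comp = [multivariate_normal(m, c).pdf(x) for m, c in zip(means, covs)]
gmm = 0.5 * comp[0] + 0.5 * comp[1]
quad = np.sqrt(0.5 * comp[0]**2 + 0.5 * comp[1]**2)
assert np.isclose(alpha_gmm_score(x, means, covs, weights, -1.0), gmm)
assert np.isclose(alpha_gmm_score(x, means, covs, weights, -3.0), quad)
```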

Another important property of α-GMM worth noting is that it is an optimal integration method for all of its components in the sense of minimizing the α-divergence between the integrated function and its components (Amari, 2007). For this, we have theorem 3.1:

Theorem 3.1
(optimization of α-integration). Let \tilde{p}(x) be the α-integration of the probability density functions p_i(x), i ∈ [1, K], with the weights w_i, and let r[\tilde{p}] be the weighted average of the α-divergences from the p_i(x) to \tilde{p}(x). Then \tilde{p}(x) is optimal under the divergence criterion of minimizing
r[\tilde{p}] = \sum_{i=1}^{K} w_i\, D_\alpha\!\left( p_i(x) \,\|\, \tilde{p}(x) \right)
3.4

The proof of theorem 3.1 is in the appendix.

It is also worth noting the role of the parameter α in shaping the integrated modeling capacity. With different values of α, a broad set of integrated functions can be constructed, including conventional GMM. With α larger than 1, the effect of the term N_i(x)^{(1−α)/2} applied to each pdf component is to warp large values toward zero and small values toward infinity (we refer to this feature as α-warping). Therefore, in this case, α-GMM emphasizes the small values of its pdf components and deemphasizes the large values. When α < 1, the effect is just the opposite: it emphasizes large values and deemphasizes small values. In this case, the integrated pdf is flatter than the pdf of conventional GMM, which employs a linear integration. This process is demonstrated by Figure 1 with the value of α set to −6. To generate these two graphs, we used two gaussian distributions with equal weights, w_1 = w_2 = 0.5, for the α-GMM and the GMM. The variances of the two gaussian distributions are var_1 = [1; 1] and var_2 = [0.5; 0.5].

Figure 1: Surface plots for α-GMM versus GMM, α = −6.
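
The warping effect can be reproduced approximately with the following sketch of the Figure 1 setup. The means below are placeholders (the exact means used for the figure are not listed in the text), so the result is only qualitatively comparable.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Figure 1 setup: two equally weighted 2-D gaussians with var1 = [1, 1] and
# var2 = [0.5, 0.5]; the means below are placeholders chosen only for illustration.
g1 = multivariate_normal(mean=[0.0, 0.0], cov=np.diag([1.0, 1.0]))
g2 = multivariate_normal(mean=[2.0, 2.0], cov=np.diag([0.5, 0.5]))
w1 = w2 = 0.5
alpha = -6.0
k = (1.0 - alpha) / 2.0                                  # = 3.5 for alpha = -6

xs, ys = np.meshgrid(np.linspace(-4.0, 6.0, 200), np.linspace(-4.0, 6.0, 200))
pts = np.dstack([xs, ys])
p1, p2 = g1.pdf(pts), g2.pdf(pts)

gmm_surface = w1 * p1 + w2 * p2                          # linear integration (GMM)
alpha_surface = (w1 * p1**k + w2 * p2**k) ** (1.0 / k)   # alpha-integration, unnormalized

# Normalize both on the grid; plotting them (e.g., with matplotlib's plot_surface)
# reproduces the qualitative comparison of Figure 1.
cell = (xs[0, 1] - xs[0, 0]) * (ys[1, 0] - ys[0, 0])
gmm_surface /= gmm_surface.sum() * cell
alpha_surface /= alpha_surface.sum() * cell
```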

The freedom introduced by factor α allows a wider range of integrated functions to be selected to address different applications. Robust speaker recognition is addressed in a later section as an example.

However, to the best of our knowledge, no method has been proposed for estimating the model parameters of the family of α-integrated functions. Because many applications that use statistical modeling, such as GMM, rely critically on a learning algorithm to reestimate model parameters from a given training data set, we adopt a similar strategy to deal with the issue of model parameter estimation for α-GMM. This is in fact the main purpose of this letter. In the next section, we present the main theorem to reestimate the parameters of α-GMM and provide its proof.

4.  Parameter Estimation Based on Maximum Likelihood with EM

This section has two parts. First, we present the main theorem to the problem of parameter estimation of α-GMM, based on a given data set. The second part presents the detailed proof.

4.1.  The Main Theorem.

Theorem 4.1
(parameter reestimation of α-GMM). Define Θ_α^{(n−1)} = {α, μ_i^{(n−1)}, Σ_i^{(n−1)}, w_i^{(n−1)}}, i ∈ [1, K], as the (n−1)th setting of the parameters of an α-GMM and Θ_α^{(n)} = {α, μ_i^{(n)}, Σ_i^{(n)}, w_i^{(n)}} as the nth setting of the parameters. Let N_l^{(n−1)}(x) be the lth gaussian mixture of the α-GMM under the (n−1)th setting Θ_α^{(n−1)}. For a given data sample x_t at time t, where t ∈ [1, N], and the (n−1)th parameter setting of the α-GMM, denote
\Pr(l \mid x_t, \Theta_\alpha^{(n-1)}) = \frac{ w_l^{(n-1)} \left[ N_l^{(n-1)}(x_t) \right]^{\frac{1-\alpha}{2}} }{ \sum_{j=1}^{K} w_j^{(n-1)} \left[ N_j^{(n-1)}(x_t) \right]^{\frac{1-\alpha}{2}} }
4.1
as the posterior probability of the data sample x_t being allocated to the lth gaussian mixture of the α-GMM with parameters Θ_α^{(n−1)}.
The main theorem is that the nth parameter setting of the α-GMM, Θ_α^{(n)}, can iteratively be improved for the case of α ≠ 1 based on the criterion of maximum likelihood, according to the following reestimation formulas, until it converges to an optimum point:
w_l^{(n)} = \frac{1}{N} \sum_{t=1}^{N} \Pr(l \mid x_t, \Theta_\alpha^{(n-1)})
4.2
\mu_l^{(n)} = \frac{ \sum_{t=1}^{N} \Pr(l \mid x_t, \Theta_\alpha^{(n-1)})\, x_t }{ \sum_{t=1}^{N} \Pr(l \mid x_t, \Theta_\alpha^{(n-1)}) }
4.3
\Sigma_l^{(n)} = \frac{ \sum_{t=1}^{N} \Pr(l \mid x_t, \Theta_\alpha^{(n-1)}) \left( x_t - \mu_l^{(n)} \right) \left( x_t - \mu_l^{(n)} \right)^{\mathrm{T}} }{ \sum_{t=1}^{N} \Pr(l \mid x_t, \Theta_\alpha^{(n-1)}) }
4.4
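
A compact sketch of one pass of these updates (for α ≠ 1) is given below. It assumes full covariance matrices and direct probability-domain computation; the function and variable names are ours, and practical systems would typically add variance flooring and log-domain arithmetic.

```python
import numpy as np
from scipy.stats import multivariate_normal

def reestimate_alpha_gmm(X, means, covs, weights, alpha):
    """
    One iteration of the reestimation formulas of theorem 4.1 (case alpha != 1).
    X: (N, d) data matrix; means: (K, d); covs: (K, d, d); weights: (K,).
    Returns updated (means, covs, weights).
    """
    N, d = X.shape
    K = len(weights)
    k = (1.0 - alpha) / 2.0

    # Equation 4.1: posteriors with each gaussian score warped by the power (1-alpha)/2.
    # (A log-domain implementation is preferable in practice to avoid underflow.)
    warped = np.empty((N, K))
    for l in range(K):
        warped[:, l] = weights[l] * multivariate_normal(means[l], covs[l]).pdf(X) ** k
    post = warped / warped.sum(axis=1, keepdims=True)      # shape (N, K)

    # Equations 4.2-4.4: GMM-style updates driven by the alpha-warped posteriors.
    occ = post.sum(axis=0)                                 # soft counts per mixture
    new_weights = occ / N
    new_means = (post.T @ X) / occ[:, None]
    new_covs = np.empty((K, d, d))
    for l in range(K):
        diff = X - new_means[l]
        new_covs[l] = (post[:, l, None] * diff).T @ diff / occ[l]
    return new_means, new_covs, new_weights
```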

4.2.  Proof.

The proof of the main theorem uses the EM algorithm based on the criterion of maximum likelihood estimation (MLE). As is well known, EM is composed of an expectation step (E-step) and a maximization step (M-step). For the presentation, we use a strategy similar to the one described in Bilmes (1997), but also draw on other work on EM algorithms (Baum & Sell, 1968; Baum et al., 1970; Dempster et al., 1977; Jiang, 2007) to describe the E-step and M-step, respectively.

4.2.1.  Objective Function of Maximum Likelihood.

We first present the objective function used for MLE. The model is reestimated based on the criterion of maximum likelihood. The likelihood of a given data set X = {x_t}, t ∈ [1, N], for a given model Θ is denoted p(X ∣ Θ), the product of the likelihoods of the individual data samples x_t when the data samples are independent and identically distributed:
p(X \mid \Theta) = \prod_{t=1}^{N} p(x_t \mid \Theta)
4.5
We apply the logarithmic operation to equation 4.5. The resulting quantity is referred to as the log likelihood of the data set X, denoted L(X ∣ Θ):
L(X \mid \Theta) = \log p(X \mid \Theta) = \sum_{t=1}^{N} \log p(x_t \mid \Theta)
4.6
By substituting the definition of α-GMM, equations 3.1 and 3.2, into equation 4.6 according to the two cases of α ≠ 1 and α = 1, we have the log likelihood of the data set X:
  1. α ≠ 1:
    L(X \mid \Theta_\alpha) = \sum_{t=1}^{N} \left[ \log c + \frac{2}{1-\alpha} \log\!\left( \sum_{i=1}^{K} w_i N_i(x_t)^{\frac{1-\alpha}{2}} \right) \right]
    4.7
  2. α = 1:
    L(X \mid \Theta_\alpha) = \sum_{t=1}^{N} \left[ \log c + \sum_{i=1}^{K} w_i \log N_i(x_t) \right]
    4.8
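
For the α ≠ 1 case, the log likelihood can be evaluated in the log domain for numerical stability. The sketch below is our illustration; it drops the additive log c term, mirroring how the constant c is treated in the E-step derivation later in this section.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def alpha_gmm_loglik(X, means, covs, weights, alpha):
    """
    Log likelihood of a data set under an alpha-GMM for alpha != 1, dropping the
    additive log c term and working in the log domain with logsumexp for stability.
    """
    k = (1.0 - alpha) / 2.0
    # log N_i(x_t) for every sample and mixture: shape (N, K)
    log_comp = np.column_stack([multivariate_normal(m, c).logpdf(X)
                                for m, c in zip(means, covs)])
    # log sum_i w_i N_i(x_t)^((1-alpha)/2), scaled by 2/(1-alpha) and summed over t
    log_warped_sum = logsumexp(np.log(weights) + k * log_comp, axis=1)
    return float(np.sum(log_warped_sum / k))
```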
The criterion of seeking the set of model parameters that maximizes the log likelihood for the given data set X is referred to as maximum likelihood estimation (MLE):
\Theta^{*} = \arg\max_{\Theta} L(X \mid \Theta)
4.9
Clearly, it is not easy to optimize L(X ∣ Θ) directly because of the summation over gaussian mixtures. However, this difficulty is addressed by introducing a hidden variable y and using lemma 1.

Let y_t ∈ [1, K] be a hidden (or unseen) variable that indicates the gaussian index to which a data sample x_t is allocated, and let y = (y_1, y_2, …, y_N) be an instance of the corresponding random variable for the given data set X; that is, y represents a possible sequence of gaussian indices allocated to the data samples x_t.

Let us further define an auxiliary function Q(Θ, Θ^{(n−1)}), where n − 1 represents the previous iteration, as
Q(\Theta, \Theta^{(n-1)}) = \sum_{y} \log p(X, y \mid \Theta)\, p(y \mid X, \Theta^{(n-1)})
4.10
The posterior probability p(y ∣ X, Θ^{(n−1)}) is given by
p(y \mid X, \Theta^{(n-1)}) = \prod_{t=1}^{N} \Pr(y_t \mid x_t, \Theta^{(n-1)})
4.11

Lemma 1.
\text{If } Q(\Theta, \Theta^{(n-1)}) \geq Q(\Theta^{(n-1)}, \Theta^{(n-1)}), \text{ then } L(X \mid \Theta) \geq L(X \mid \Theta^{(n-1)})
4.12

Proof.
By definition of the auxiliary function and the condition, we know that:
formula
4.13
According to the Bayesian rule, and because p(Θ) = p(Θ^{(n−1)}) and p(X) is a constant, we have
formula
4.14
By the Bayesian rule again, we obtain
formula
4.15

By applying lemma 1, we can transform the problem of optimizing L(X ∣ Θ) into the problem of optimizing Q(Θ, Θ^{(n−1)}).

4.2.2.  E-Step.

Let us consider the case of α ≠ 1. By simple calculus, the auxiliary function Q(Θ, Θ^{(n−1)}) for the training data can be rewritten as
Q(\Theta, \Theta^{(n-1)}) = \sum_{y} \sum_{t=1}^{N} \left[ \log c + \frac{2}{1-\alpha} \log\!\left( w_{y_t} N_{y_t}(x_t)^{\frac{1-\alpha}{2}} \right) \right] \prod_{j=1}^{N} \Pr(y_j \mid x_j, \Theta^{(n-1)})
4.16
By throwing away the constant c, we can simplify equation 4.16 to
Q(\Theta, \Theta^{(n-1)}) = \frac{2}{1-\alpha} \sum_{l=1}^{K} \sum_{y} \sum_{t=1}^{N} \delta_{l, y_t} \left[ \log w_l + \log N_l(x_t)^{\frac{1-\alpha}{2}} \right] \prod_{j=1}^{N} \Pr(y_j \mid x_j, \Theta^{(n-1)})
4.17
where
\delta_{l, y_t} = \begin{cases} 1, & y_t = l, \\ 0, & \text{otherwise}, \end{cases}
4.18
N_l(x_t) = N(x_t \mid \mu_l, \Sigma_l)
4.19
In this form, Q(Θ, Θ^{(n−1)}) appears computationally challenging. However, as in Bilmes (1997), it can be simplified if we notice that, for l ∈ {1, 2, …, K},
\sum_{y} \delta_{l, y_t} \prod_{j=1}^{N} \Pr(y_j \mid x_j, \Theta^{(n-1)}) = \Pr(l \mid x_t, \Theta^{(n-1)})
4.20
This is because \sum_{j=1}^{K} \Pr(j \mid x_t, \Theta^{(n-1)}) = 1. Using equation 4.20, we can rewrite equation 4.17 as
Q(\Theta, \Theta^{(n-1)}) = \frac{2}{1-\alpha} \sum_{l=1}^{K} \left[ \Phi(w_l) + \Psi(\Theta_l) \right]
4.21
where
\Phi(w_l) = \sum_{t=1}^{N} \log(w_l)\, \Pr(l \mid x_t, \Theta^{(n-1)})
4.22
\Psi(\Theta_l) = \frac{1-\alpha}{2} \sum_{t=1}^{N} \log N_l(x_t \mid \mu_l, \Sigma_l)\, \Pr(l \mid x_t, \Theta^{(n-1)})
4.23

4.2.3.  M-Step.

In the M-step, we maximize the expectation obtained in the E-step for the case of α ≠ 1.

For the case of α ≠ 1, we can optimize equation 4.21 for each of its two terms, respectively. That is, for w_l, we have
\frac{\partial}{\partial w_l} \left[ \sum_{l=1}^{K} \Phi(w_l) + \lambda \left( \sum_{l=1}^{K} w_l - 1 \right) \right] = 0
4.24
where λ is a Lagrange multiplier.
Therefore, we have the following equation:
\sum_{t=1}^{N} \Pr(l \mid x_t, \Theta^{(n-1)}) = -\lambda\, w_l
4.25
Summing both sides of equation 4.25 over l, we obtain
\sum_{l=1}^{K} \sum_{t=1}^{N} \Pr(l \mid x_t, \Theta^{(n-1)}) = -\lambda \sum_{l=1}^{K} w_l, \qquad \text{i.e.,} \quad N = -\lambda
4.26
Therefore, we have
\lambda = -N
4.27
which yields
w_l = \frac{1}{N} \sum_{t=1}^{N} \Pr(l \mid x_t, \Theta^{(n-1)})
4.28
For Θ_l, we expand the logarithm of the gaussian component:
\log N_l(x_t \mid \mu_l, \Sigma_l) = -\frac{d}{2} \log(2\pi) - \frac{1}{2} \log |\Sigma_l| - \frac{1}{2} (x_t - \mu_l)^{\mathrm{T}} \Sigma_l^{-1} (x_t - \mu_l)
4.29
Substituting equation 4.29 into Ψ(Θ_l), we can obtain
\Psi(\Theta_l) = \frac{1-\alpha}{2} \sum_{t=1}^{N} \Pr(l \mid x_t, \Theta^{(n-1)}) \left[ -\frac{d}{2} \log(2\pi) - \frac{1}{2} \log |\Sigma_l| - \frac{1}{2} (x_t - \mu_l)^{\mathrm{T}} \Sigma_l^{-1} (x_t - \mu_l) \right]
4.30
If we ignore the constant terms (since they disappear after taking derivatives), we get
\Psi(\Theta_l) = -\frac{1-\alpha}{4} \sum_{t=1}^{N} \Pr(l \mid x_t, \Theta^{(n-1)}) \left[ \log |\Sigma_l| + (x_t - \mu_l)^{\mathrm{T}} \Sigma_l^{-1} (x_t - \mu_l) \right] + \text{const}
4.31
Therefore, taking the derivative of equation 4.31 with respect to μ_l and setting it equal to zero, we get
\frac{\partial \Psi(\Theta_l)}{\partial \mu_l} = \frac{1-\alpha}{2} \sum_{t=1}^{N} \Pr(l \mid x_t, \Theta^{(n-1)})\, \Sigma_l^{-1} (x_t - \mu_l) = 0
4.32
which, solving for μ_l, yields
\mu_l = \frac{ \sum_{t=1}^{N} \Pr(l \mid x_t, \Theta^{(n-1)})\, x_t }{ \sum_{t=1}^{N} \Pr(l \mid x_t, \Theta^{(n-1)}) }
4.33
And similarly, as in Bilmes (1997), we also get
\Sigma_l = \frac{ \sum_{t=1}^{N} \Pr(l \mid x_t, \Theta^{(n-1)}) (x_t - \mu_l)(x_t - \mu_l)^{\mathrm{T}} }{ \sum_{t=1}^{N} \Pr(l \mid x_t, \Theta^{(n-1)}) }
4.34
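
Putting the pieces together, a training loop under these reestimation formulas might look like the following sketch. It reuses the helper functions from the earlier sketches; for brevity it initializes the means from random data samples rather than with the K-means procedure discussed in section 6, so it is only a stand-in for the actual training setup.

```python
import numpy as np

def train_alpha_gmm(X, K, alpha, n_iter=50, tol=1e-4, seed=0):
    """
    Iterative alpha-GMM training (alpha != 1) with the updates of theorem 4.1,
    using reestimate_alpha_gmm and alpha_gmm_loglik from the sketches above.
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape
    means = X[rng.choice(N, size=K, replace=False)]         # crude initial means
    covs = np.stack([np.cov(X.T) + 1e-6 * np.eye(d)] * K)   # shared initial covariance
    weights = np.full(K, 1.0 / K)

    prev = -np.inf
    for _ in range(n_iter):
        means, covs, weights = reestimate_alpha_gmm(X, means, covs, weights, alpha)
        covs = covs + 1e-6 * np.eye(d)                       # simple variance flooring
        cur = alpha_gmm_loglik(X, means, covs, weights, alpha)
        if cur - prev < tol:                                 # stop once improvement stalls
            break
        prev = cur
    return means, covs, weights
```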

5.  Experiments

In this section, we present experiments on robust speaker recognition as an example to demonstrate the performance difference between the proposed training algorithm for α-GMM and conventional GMM training.

Speaker recognition is a pattern recognition task, based mainly on statistical modeling, that recognizes a speaker's identity from voice characteristics. There are two types of applications in speaker recognition: speaker identification (SI) and speaker verification (SV). An SI task recognizes a speaker identity from a given set of speakers enrolled in the system, and an SV task verifies a speaker's identity by answering a binary question with a yes or no. In our experiments, we selected the SI task to show the effectiveness of the application of α-GMM, without loss of generality with respect to its application to other pattern recognition tasks such as SV.

In our experiments, as in Wu et al. (2005), Mel frequency cepstral coefficient features, obtained using the hidden Markov model toolkit (Young et al., 2002), were used, with 20 ms windows and 10 ms shift, a preemphasis factor of 0.97, a Hamming window, and 20 Mel scaled feature bands. All 20 MFCC coefficients were used except c0. On this database, silence removal, cepstral mean subtraction, and time difference features did not increase performance, so these were not used.
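
A rough Python stand-in for this front end is sketched below using librosa rather than HTK, so the numerical output will not match the original features exactly; the 8 kHz sampling rate is our assumption for telephone-bandwidth NTIMIT audio.

```python
import librosa

def extract_features(wav_path, sr=8000):
    """20 MFCCs without c0, 20 ms window, 10 ms shift, 0.97 preemphasis, Hamming window."""
    y, sr = librosa.load(wav_path, sr=sr)
    y = librosa.effects.preemphasis(y, coef=0.97)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=21, n_mels=20,
                                n_fft=int(0.020 * sr), hop_length=int(0.010 * sr),
                                window="hamming")
    return mfcc[1:].T                  # drop c0; rows are frames, shape (n_frames, 20)
```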

As a baseline, we trained a GMM with 32 gaussians for each speaker in a given set of 162 speakers, a moderately sized task. We correspondingly trained an α-GMM for each speaker for comparison. The training and test data were based on the NTIMIT database, a telephony corpus (Campbell & Reynolds, 1999). Six utterances were used as training material for GMM and α-GMM, and two other utterances were used for testing. The recognition criterion is to select the largest score from the given model group, that is, to select the most probable speaker model as the target speaker identity. In our configuration, we evaluated the performance of α-GMM by assigning different values to the parameter α so as to select different integration functions. The values of α were chosen from the range [−12, 0.5]; values above this range were not tested.
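
The identification decision rule described above amounts to an argmax over per-speaker model scores, as in the following sketch (the data structure holding the enrolled models is our own convention):

```python
def identify_speaker(test_frames, speaker_models, alpha):
    """
    Closed-set speaker identification: score the test utterance against every enrolled
    speaker's alpha-GMM and return the best-scoring speaker label.
    speaker_models: dict mapping speaker id -> (means, covs, weights).
    Assumes alpha_gmm_loglik from the sketch in section 4 is in scope.
    """
    scores = {spk: alpha_gmm_loglik(test_frames, m, c, w, alpha)
              for spk, (m, c, w) in speaker_models.items()}
    return max(scores, key=scores.get)
```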

The results are shown in Figure 2. We can see from this figure that when α ∈ [−12, −1], all the integrated functions were experimentally better than the linear integration of the conventional GMM on the telephony corpus. Values of α higher than −1 degraded the integration performance for telephony speech. The best performance was attained with α = −6. (Due to the specific purpose of this letter, we shall not present more experiments.) The simple experiment presented here was intended only to show the difference between the proposed training algorithm for α-GMM and conventional GMM training. (For further experimental results on α-GMM applied to robust speaker recognition, see Wu, 2008.)

Figure 2: α-GMM versus conventional GMM on the NTIMIT database.

6.  Discussion

Here we discuss some issues that have not been covered already.

First, the reestimation formulas given in theorem 4.1 apply to the case of α ≠ 1. For α = 1, the exponential integration, theorem 4.1 is not applicable because the weights w_i and the gaussian components N_i(x) cannot be separated as in equation 4.21 (simple calculus can show this). So in this case, it might be that EM learning cannot be used. How to derive a reestimation formula for this case is a topic for future work. Nevertheless, this does not undermine the most important point of this letter: the proposed reestimation formulas are applicable to most values of α and therefore satisfy the requirements of most applications, such as SI and SV. This is a key point.

The second point is a convergence issue. The proposed algorithm is a recursive training approach: the model parameters are obtained in an iterative way. At the beginning of the training stage, initial values have to be set for the parameters of a given α-GMM. Generally, two methods are available for initialization of model parameters: K-means clustering and mixture splitting. The K-means method sets up initial values directly for an assigned number of clusters, whereas the mixture splitting method splits clusters recursively from a relatively low number to a higher one. Both are often used in realistic applications. In our method, we adopted the K-means clustering algorithm for initialization, as does the baseline GMM, which achieved good performance (Wu, 2006; Reynolds, 1995). After initialization, at each iterative training step, the model parameters are updated to better values with the proposed method, that is, according to the formulas given in equations 4.28, 4.33, and 4.34. These procedures continue until the training process converges to an optimum point in the solution region. This point is reflected by lemma 1.
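
A minimal sketch of such a K-means initialization, using scikit-learn, is given below; the exact initialization details of the original system may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_init(X, K, seed=0):
    """
    K-means initialization of alpha-GMM parameters: cluster centers become the initial
    means, per-cluster sample covariances the initial covariances, and cluster
    occupancies the initial weights.
    """
    km = KMeans(n_clusters=K, n_init=10, random_state=seed).fit(X)
    d = X.shape[1]
    means = km.cluster_centers_
    covs = np.empty((K, d, d))
    weights = np.empty(K)
    for l in range(K):
        members = X[km.labels_ == l]
        covs[l] = np.cov(members.T) + 1e-6 * np.eye(d)   # guard against singularity
        weights[l] = len(members) / len(X)
    return means, covs, weights
```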

However, the algorithm does not necessarily guarantee attaining a globally optimal solution. Because the proposed method is adapted from the EM algorithm, it has similar properties to the conventional EM algorithm. The EM algorithm cannot guarantee finding a global optimum point in the solution region for a given data set; it may stop at a local optimum point. Therefore, a good initialization is extremely important for the effectiveness of models trained using the EM algorithm. This point is also valid for the method proposed for α-GMM. In conventional GMM training, the K-means method was found to be an effective way to facilitate EM training. Considering that the updating formulas of α-GMM comply with those of the conventional GMM, except for introducing the factor α into the posterior probabilities (see equation 4.1), we still use the K-means algorithm as the clustering algorithm for initializing model parameters. However, more advanced clustering algorithms, such as mixture splitting, are worth investigating.

Third, we emphasize the role of the factor α in the integration function. From theorem 4.1, we see that the reestimation formulas are mathematically simple, although the derivation is somewhat involved. Compared with the updating formulas for conventional GMM training, equations 4.28, 4.33, and 4.34 for the α-GMM look very similar to the corresponding GMM equations except for the posterior in equation 4.1. The essential difference in equation 4.1 is that an α factor is applied to each component in the gaussian mixture so as to warp its contribution to the final probability score of the composite model. By choosing different values of α, the integrated model can emphasize either small or large component scores; in particular, it can suppress the effect of noisy components in the data by deemphasizing small values. This is indeed the essence of α-GMM, which we refer to as α-warping at the score level. Furthermore, the effect of α-warping is reflected not only in each component score but also in the final score formed from the sum of the component scores. This step acts like a further normalization.

The next noteworthy point is the comparison between GMM and α-GMM in terms of complexity. Given the strong formal similarity between the parameter estimation equations of α-GMM and those of conventional GMM, α-GMM has a variety of advantages over conventional GMM. First, α-GMM is a superset of conventional GMM: GMM is the special case of α-GMM with α = −1. Therefore, α-GMM has better modeling capacity than conventional GMM. Second, the computational complexity of α-GMM is similar to that of GMM. It is easy to see from equation 4.1 that the only additional cost for calculating probability scores is the calculation of the powers involving the factor α on the mixture scores. Considering that logarithmic probabilities are normally used instead of probabilities, the computational cost for the α-GMM is raised only by M + 1 multiplications, where M is the number of mixtures in the α-GMM. Therefore, the computational costs of α-GMM and traditional GMM are comparable, at the same complexity level. Considering all the above reasons, α-GMM can therefore be viewed as a more powerful modeling tool than conventional GMM.

The fifth point concerns the selection of the value of α. The algorithm in this letter derives the reestimation formulas for a fixed value of α; that is, throughout the optimization procedure, the value of α is assumed constant. However, a more sophisticated question is whether it is possible to optimize the parameter α as well. This is in fact the problem of selecting the optimal integration method for a specific application scenario. This could be an extension of this work in the near future.

Finally, we give another possible extension—this one on the criterion used for optimization. The current criterion is to use maximum likelihood as an objective function for parameter optimization. Many other criteria can also possibly be employed. Among these, maximum a posteriori (MAP), maximum mutual information (MMI), and other discriminant training methods are likely to be useful. Future work will investigate these ideas for training α-GMM.

7.  Conclusion

This letter presented a theorem concerning parameter reestimation for α-GMM. In the proof of this theorem, the expectation-maximization algorithm was applied to solve an objective function based on maximizing the likelihood of a given data set. The overall procedure of the proof was given in two separate steps: the E-step and the M-step. The resulting formulas to reestimate the model parameters of the α-GMM were found to be simple and compatible with those of GMM. This advantage gives the α-GMM the same level of computational complexity as the conventional GMM in both the training and test stages. In addition, experiments on a moderately sized speaker recognition task confirmed the effectiveness of the learning algorithm for α-GMM.

Appendix:  Proof of Theorem 3.1

The proof of theorem 3.1 follows the method proposed in Amari (2007). The essential idea is to optimize by differentiating the objective function, equation 3.4, with respect to the integrated function \tilde{p}(x).

Proof.
This is a constrained optimization problem with the constraint \int \tilde{p}(x)\, dx = 1. We first prove the case of α ≠ ±1. For this, we differentiate equation 3.4 with respect to \tilde{p}(x):
\frac{\partial}{\partial \tilde{p}(x)} \left[ \sum_{i=1}^{K} w_i\, D_\alpha\!\left( p_i(x) \,\|\, \tilde{p}(x) \right) + \lambda \left( \int \tilde{p}(x)\, dx - 1 \right) \right] = 0
A.1
where λ is a Lagrange multiplier. We then deduce the equation
\frac{2}{1-\alpha} \sum_{i=1}^{K} w_i\, p_i(x)^{\frac{1-\alpha}{2}}\, \tilde{p}(x)^{\frac{\alpha-1}{2}} = \lambda
A.2
Simplifying equation A.2 by simple calculus, we get the optimum \tilde{p}(x):
\tilde{p}(x) = c \left( \sum_{i=1}^{K} w_i\, p_i(x)^{\frac{1-\alpha}{2}} \right)^{\frac{2}{1-\alpha}}
A.3
When α = 1 and α = −1, we have
\tilde{p}(x) = c \exp\!\left( \sum_{i=1}^{K} w_i \log p_i(x) \right)
A.4
\tilde{p}(x) = c \sum_{i=1}^{K} w_i\, p_i(x)
A.5
respectively.

Hence, for any α, the optimum \tilde{p}(x) is the α-integration of its components.
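
As a numerical sanity check of theorem 3.1 (our illustration, not part of the proof), the following sketch verifies on a discretized 1-D example that the normalized α-integration attains a lower weighted α-divergence than nearby perturbed densities.

```python
import numpy as np

# Discretized example: two gaussian densities, weights w, and alpha = -6.
grid = np.linspace(-10.0, 10.0, 4001)
dx = grid[1] - grid[0]
p1 = np.exp(-0.5 * grid**2) / np.sqrt(2.0 * np.pi)
p2 = np.exp(-0.5 * (grid - 2.0)**2 / 0.25) / np.sqrt(2.0 * np.pi * 0.25)
w = np.array([0.3, 0.7])
alpha = -6.0
k = (1.0 - alpha) / 2.0

def d_alpha(p, q):
    """alpha-divergence of equation 2.6 for alpha != +/-1, evaluated on the grid."""
    return 4.0 / (1.0 - alpha**2) * (
        1.0 - np.sum(p**((1.0 - alpha) / 2.0) * q**((1.0 + alpha) / 2.0)) * dx)

def objective(q):
    """Weighted alpha-divergence sum of equation 3.4."""
    return w[0] * d_alpha(p1, q) + w[1] * d_alpha(p2, q)

mix = (w[0] * p1**k + w[1] * p2**k) ** (1.0 / k)
mix /= np.sum(mix) * dx                                    # normalized alpha-integration

rng = np.random.default_rng(0)
for _ in range(5):
    bump = 1.0 + 0.05 * rng.standard_normal(len(grid))     # small random perturbation
    perturbed = mix * bump
    perturbed /= np.sum(perturbed) * dx
    assert objective(mix) <= objective(perturbed)
```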

Acknowledgments

I sincerely thank the anonymous reviewers who made important comments on this manuscript and substantially improved its quality.

References

Amari, S. (2007). Integration of stochastic models by minimizing α-divergence. Neural Computation, 19, 2780–2796.

Baum, L. E., Petrie, T., Soules, G., & Weiss, N. (1970). A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Annals of Mathematical Statistics, 41, 164–171.

Baum, L. E., & Sell, G. R. (1968). Growth transformations for functions on manifolds. Pacific Journal of Mathematics, 27, 211–227.

Bilmes, J. A. (1997). A gentle tutorial of the EM algorithm and its application to parameter estimation for gaussian mixture and hidden Markov models (Tech. Rep. TR-97-021). Berkeley: University of California.

Campbell, J. P., & Reynolds, D. A. (1999). Corpora for the evaluation of speaker recognition systems. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (Vol. 2, pp. 829–832). Washington, DC: IEEE Computer Society.

Chernoff, H. (1952). A measure of asymptotic efficiency for tests of a hypothesis based on a sum of observations. Annals of Mathematical Statistics, 23, 493–507.

Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm (with discussion). Journal of the Royal Statistical Society B, 39, 1–38.

Fisher, W. M., Doddington, G. R., & Goudie-Marshall, K. M. (1986). The DARPA speech recognition research database: Specifications and status. In Proceedings of the DARPA Workshop on Speech Recognition (pp. 93–99). Orlando, FL: Academic Press.

Hardy, G. H., Littlewood, J. E., & Polya, G. (1952). Inequalities (2nd ed.). Cambridge: Cambridge University Press.

Jiang, H. (2007). A general formulation for discriminative learning of generative graphical models (Tech. Rep.). York: Department of Computer Science and Engineering, York University.

Petz, D., & Temesi, R. (2005). Means of positive numbers and matrices. SIAM Journal on Matrix Analysis and Applications, 27, 712–720.

Reynolds, D. A. (1995). Large population speaker identification using clean and telephone speech. IEEE Signal Processing Letters, 2, 46–48.

Wu, D. (2006). Discriminative preprocessing of speech: Towards improving biometric authentication. Unpublished doctoral dissertation, Saarland University, Saarbruecken, Germany.

Wu, D. (2008). α-gaussian mixture modelling for speaker recognition. Manuscript submitted for publication. Available online at www.cse.yorku.ca/~daleiwu/alphaGMMPRL.pdf.

Wu, D., Morris, A., & Koreman, J. (2005). MLP internal representation as discriminative features for improved speaker recognition. In Nonlinear Analyses and Algorithms for Speech Processing Part II (pp. 72–78). Berlin: Springer.

Young, S., Evermann, G., Hain, T., Kershaw, D., & Moore, G. (2002). The HTK book. Cambridge: Cambridge University Press.