We consider the problem of extracting a common structure from multiple tensor data sets. For this purpose, we propose multilinear common component analysis (MCCA) based on Kronecker products of mode-wise covariance matrices. MCCA constructs a common basis represented by linear combinations of the original variables that lose little information of the multiple tensor data sets. We also develop an estimation algorithm for MCCA that guarantees mode-wise global convergence. Numerical studies are conducted to show the effectiveness of MCCA.

Various statistical methodologies for extracting useful information from large amounts of data have been studied over the decades, especially since the appearance of big data. In the present era, it is important to discover a common structure shared by multiple data sets. In an early study, Flury (1984) focused on the structure of the covariance matrices of multiple data sets and discussed the heterogeneity of that structure, reporting that population covariance matrices differ among multiple data sets in practical applications. Many methodologies have been developed for treating the heterogeneity between covariance matrices of multiple data sets (see Flury, 1986, 1988; Flury & Gautschi, 1986; Pourahmadi, Daniels, & Park, 2007; Wang, Banerjee, & Boley, 2011; Park & Konishi, 2020).

Among such methodologies, common component analysis (CCA; Wang et al., 2011) is an effective statistical tool. The central idea of CCA is to reduce the number of dimensions of the data while losing as little information of the multiple data sets as possible. To reduce the number of dimensions, CCA reconstructs the data with a few new variables that are linear combinations of the original variables. To account for the heterogeneity between the covariance matrices of multiple data sets, CCA assumes a different covariance matrix for each data set. Many statistical methodologies use multiple covariance matrices: discriminant analysis (Bensmail & Celeux, 1996), spectral decomposition (Boik, 2002), and a likelihood ratio test for multiple covariance matrices (Manly & Rayner, 1987). It should be noted that principal component analysis (PCA) (Pearson, 1901; Jolliffe, 2002) is a technique similar to CCA. In fact, CCA is a generalization of PCA; PCA can only be applied to one data set, whereas CCA can be applied to multiple data sets.

Meanwhile, in various fields of research, including machine learning and computer vision, the main interest has been in tensor data, which has a multidimensional array structure. In order to apply the conventional statistical methodologies, such as PCA, to tensor data, a simple approach is to first transform the tensor data into vector data and then apply the methodology. However, such an approach causes the following problems:

  1. In losing the tensor structure of the data, the approach ignores the higher-order inherent relationships of the original tensor data.

  2. Transforming tensor data into vector data substantially increases the number of features, which in turn incurs a high computational cost.

To overcome these problems, statistical methodologies for tensor data analyses have been proposed that take the tensor structure of the data into consideration. Such methods enable us to accurately extract higher-order inherent relationships in a tensor data set. In particular, many existing statistical methodologies have been extended for tensor data, for example, multilinear principal component analysis (MPCA) (Lu et al., 2008) and sparse PCA for tensor data analysis (Allen, 2012; Wang, Sun, Chen, Pang, & Zhou, 2012; Lai, Xu, Chen, Yang, & Zhang, 2014), as well as others (see Carroll & Chang, 1970; Harshman, 1970; Kiers, 2000; Badeau & Boyer, 2008; Kolda & Bader, 2009).

In this letter, we extend CCA to tensor data analysis, proposing multilinear common component analysis (MCCA). MCCA discovers the common structure of multiple tensor data sets while losing as little of the information of the data sets as possible. To identify the common structure, we estimate a common basis constructed as linear combinations of the original variables. For estimating the common basis, we develop a new estimation algorithm based on the idea of CCA. In developing the estimation algorithm, two issues must be addressed: the convergence properties of the algorithm and its computational cost. To determine the convergence properties, we first investigate the relationship between the initial values of the parameters and the global optimal solution, and then the monotonic convergence of the estimation algorithm. These analyses reveal that our proposed algorithm guarantees convergence to the mode-wise global optimal solution under some conditions. To analyze the computational efficacy, we calculate the computational cost of our proposed algorithm.

The rest of the letter is organized as follows. In section 2, we review the formulation and the minimization problem of CCA. In section 3, we formulate the MCCA model by constructing the covariance matrices of tensor data, based on a Kronecker product representation. Then we formulate the estimation algorithm for MCCA in section 4. In section 5, we present the theoretical properties for our proposed algorithm and analyze the computational cost. The efficacy of the MCCA is demonstrated through numerical experiments in section 6. Concluding remarks are presented in section 7. Technical proofs are provided in the appendixes. Our implementation of MCCA and supplementary materials are available at https://github.com/yoshikawa-kohei/MCCA.

Suppose that we obtain data matrices $X^{(g)} = [x^{(g)}_1, \ldots, x^{(g)}_{N_g}]^\top \in \mathbb{R}^{N_g \times P}$ with $N_g$ observations and $P$ variables for $g = 1, \ldots, G$, where $x^{(g)}_i$ is the $P$-dimensional vector corresponding to the $i$th row of $X^{(g)}$ and $G$ is the number of data sets. Then the sample covariance matrix in group $g$ is
$$ S^{(g)} = \frac{1}{N_g} \sum_{i=1}^{N_g} \bigl(x^{(g)}_i - \bar{x}^{(g)}\bigr)\bigl(x^{(g)}_i - \bar{x}^{(g)}\bigr)^\top, \qquad (2.1) $$
where $S^{(g)} \in \mathcal{S}_+^P$, in which $\mathcal{S}_+^P$ is the set of symmetric positive-definite matrices of size $P \times P$, and $\bar{x}^{(g)} = \frac{1}{N_g}\sum_{i=1}^{N_g} x^{(g)}_i$ is the $P$-dimensional mean vector in group $g$.
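For concreteness, a minimal R sketch of equation 2.1 follows; it assumes the data of group $g$ are stored as an $N_g \times P$ matrix in a list `X_list` (a hypothetical name) and uses the $1/N_g$ normalization written above.

```r
# Sketch: group-wise sample covariance matrices (equation 2.1).
# Assumes X_list[[g]] is the N_g x P data matrix of group g.
sample_cov <- function(X) {
  Xc <- scale(X, center = TRUE, scale = FALSE)  # subtract the group mean x-bar
  crossprod(Xc) / nrow(X)                       # (1/N_g) sum_i (x_i - x-bar)(x_i - x-bar)^T
}
S_list <- lapply(X_list, sample_cov)            # S^(1), ..., S^(G)
```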
The main idea of the CCA model is to find the common structure of multiple data sets by projecting the data onto a common lower-dimensional space with the same basis across the data sets. Wang et al. (2011) assumed that the covariance matrices $S^{(g)}$ for $g = 1, \ldots, G$ can be decomposed into a product of latent covariance matrices and an orthogonal matrix for the linear transformation as
$$ S^{(g)} = V \Lambda^{(g)} V^\top + E^{(g)}, \qquad V^\top V = I_R, \qquad (2.2) $$
where $\Lambda^{(g)} \in \mathcal{S}_+^R$ is the latent covariance matrix in group $g$, $V \in \mathbb{R}^{P \times R}$ is an orthogonal matrix for the linear transformation, $E^{(g)} \in \mathcal{S}_+^P$ is the error matrix in group $g$, and $I_R$ is the identity matrix of size $R \times R$. $E^{(g)}$ consists of the sum of outer products of independent random vectors, $\sum_{i=1}^{N_g} e^{(g)}_i e^{(g)\top}_i$, with mean $\mathrm{E}[e^{(g)}_i] = 0$ and covariance matrix $\mathrm{Cov}[e^{(g)}_i] \succ O$ $(i = 1, 2, \ldots, N_g)$. $V$ determines the $R$-dimensional common subspace of the multiple data sets. In particular, by assuming $R < P$, CCA can discover the latent structures of the data sets. Wang et al. (2011) referred to the model, equation 2.2, as common component analysis (CCA).
The parameters $V$ and $\Lambda^{(g)}$ $(g = 1, \ldots, G)$ are estimated by solving the minimization problem
$$ \min_{V, \Lambda^{(1)}, \ldots, \Lambda^{(G)}} \; \sum_{g=1}^{G} \bigl\| S^{(g)} - V \Lambda^{(g)} V^\top \bigr\|_F^2 \quad \text{subject to} \quad V^\top V = I_R, \qquad (2.3) $$
where $\|\cdot\|_F$ denotes the Frobenius norm. The estimators of the latent covariance matrices $\Lambda^{(g)}$ for $g = 1, \ldots, G$ can be obtained in closed form as $\hat{\Lambda}^{(g)} = V^\top S^{(g)} V$. Substituting $\hat{\Lambda}^{(g)}$ into equation 2.3 gives $\sum_{g=1}^{G} \{\|S^{(g)}\|_F^2 - \mathrm{tr}[(V^\top S^{(g)} V)^2]\}$, so the minimization problem can be reformulated as the following maximization problem:
$$ \max_{V} \; \sum_{g=1}^{G} \mathrm{tr}\bigl[\bigl(V^\top S^{(g)} V\bigr)^2\bigr] \quad \text{subject to} \quad V^\top V = I_R, \qquad (2.4) $$

where $\mathrm{tr}(\cdot)$ denotes the trace of a matrix. A crucial issue in solving the maximization problem 2.4 is its nonconvexity: the problem is defined over the set of orthogonal matrices, which is a nonconvex set. In general, it is difficult to find the global optimal solution of a nonconvex optimization problem. To overcome this drawback, Wang et al. (2011) proposed an estimation algorithm in which the estimated parameters are guaranteed to constitute the global optimal solution under some conditions.

In this section, we introduce a mathematical formulation of MCCA, an extension of CCA to tensor data analysis. Moreover, we formulate an optimization problem for MCCA and investigate its convergence properties.

Suppose that we independently obtain $M$th-order tensor data $\mathcal{X}^{(g)}_i \in \mathbb{R}^{P_1 \times P_2 \times \cdots \times P_M}$ for $i = 1, \ldots, N_g$. We collect them into the tensor data sets $\mathcal{X}^{(g)} = [\mathcal{X}^{(g)}_1, \mathcal{X}^{(g)}_2, \ldots, \mathcal{X}^{(g)}_{N_g}] \in \mathbb{R}^{P_1 \times P_2 \times \cdots \times P_M \times N_g}$ for $g = 1, \ldots, G$, where $G$ is the number of data sets. Then the sample covariance matrix in group $g$ for the tensor data set is defined by
$$ S^{(g)*} = S^{(g)}_{(1)} \otimes S^{(g)}_{(2)} \otimes \cdots \otimes S^{(g)}_{(M)}, \qquad (3.1) $$
where $S^{(g)*} \in \mathcal{S}_+^{P}$ with $P = \prod_{k=1}^{M} P_k$, $\otimes$ denotes the Kronecker product operator, and $S^{(g)}_{(k)} \in \mathcal{S}_+^{P_k}$ is the sample covariance matrix for the $k$th mode in group $g$, defined by
$$ S^{(g)}_{(k)} = \frac{1}{N_g} \sum_{i=1}^{N_g} \bigl(X^{(g)}_{i(k)} - \bar{X}^{(g)}_{(k)}\bigr)\bigl(X^{(g)}_{i(k)} - \bar{X}^{(g)}_{(k)}\bigr)^\top. \qquad (3.2) $$
Here, $X^{(g)}_{i(k)} \in \mathbb{R}^{P_k \times (\prod_{j \neq k} P_j)}$ is the mode-$k$ unfolded matrix of $\mathcal{X}^{(g)}_i$, and $\bar{X}^{(g)}_{(k)} \in \mathbb{R}^{P_k \times (\prod_{j \neq k} P_j)}$ is the mode-$k$ unfolded matrix of $\bar{\mathcal{X}}^{(g)} = \frac{1}{N_g}\sum_{i=1}^{N_g} \mathcal{X}^{(g)}_i$. Note that the mode-$k$ unfolding of an $M$th-order tensor $\mathcal{X} \in \mathbb{R}^{P_1 \times P_2 \times \cdots \times P_M}$ into a matrix $X_{(k)} \in \mathbb{R}^{P_k \times (\prod_{j \neq k} P_j)}$ maps the tensor element $(p_1, p_2, \ldots, p_M)$ to the matrix element $(p_k, l)$, where $l = 1 + \sum_{t=1, t \neq k}^{M} (p_t - 1) L_t$ with $L_t = \prod_{m=1, m \neq k}^{t-1} P_m$, in which $p_1, p_2, \ldots, p_M$ denote the indices of the $M$th-order tensor $\mathcal{X}$. For a more detailed description of tensor operations, see Kolda and Bader (2009). A representation of the tensor covariance matrix by Kronecker products is often used (Kermoal, Schumacher, Pedersen, Mogensen, & Frederiksen, 2002; Yu et al., 2004; Werner, Jansson, & Stoica, 2008).
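As an illustration of the mode-$k$ unfolding and the mode-wise covariance of equation 3.2, here is a minimal R sketch; it assumes each group is stored as an array of size $P_1 \times \cdots \times P_M \times N_g$ (observations along the last mode), which is an assumed data layout rather than part of the model.

```r
# Mode-k unfolding of an Mth-order array (Kolda & Bader, 2009 convention).
unfold_mode_k <- function(A, k) {
  dims <- dim(A)
  matrix(aperm(A, c(k, seq_along(dims)[-k])), nrow = dims[k])
}

# Mode-k sample covariance of one group (equation 3.2).
# X: array of size P_1 x ... x P_M x N_g.
mode_k_cov <- function(X, k) {
  dims <- dim(X); M <- length(dims) - 1; Ng <- dims[M + 1]
  Xbar_k <- unfold_mode_k(apply(X, 1:M, mean), k)          # unfolded mean tensor
  S <- matrix(0, dims[k], dims[k])
  for (i in 1:Ng) {
    Xi <- array(X[slice.index(X, M + 1) == i], dim = dims[1:M])
    D  <- unfold_mode_k(Xi, k) - Xbar_k
    S  <- S + D %*% t(D)
  }
  S / Ng
}
```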
To formulate CCA in terms of tensor data analysis, we consider CCA for the $k$th-mode covariance matrix in group $g$ as follows:
$$ S^{(g)}_{(k)} = V^{(k)} \Lambda^{(g)}_{(k)} V^{(k)\top} + E^{(g)}_{(k)}, \qquad V^{(k)\top} V^{(k)} = I_{R_k}, \qquad (3.3) $$
where $\Lambda^{(g)}_{(k)} \in \mathcal{S}_+^{R_k}$ is the latent $k$th-mode covariance matrix in group $g$, $V^{(k)} \in \mathbb{R}^{P_k \times R_k}$ is an orthogonal matrix for the linear transformation, and $E^{(g)}_{(k)} \in \mathcal{S}_+^{P_k}$ is the error matrix in group $g$. $E^{(g)}_{(k)}$ consists of the sum of outer products of independent random vectors, $\sum_{i=1}^{N_g} e^{(g)}_{i(k)} e^{(g)\top}_{i(k)}$, with mean $\mathrm{E}[e^{(g)}_{i(k)}] = 0$ and covariance matrix $\mathrm{Cov}[e^{(g)}_{i(k)}] \succ O$ $(i = 1, 2, \ldots, N_g)$. Since $S^{(g)*}$ can be decomposed into a Kronecker product of $S^{(g)}_{(k)}$ for $k = 1, \ldots, M$ as in formula 3.1, we obtain the following model:
$$ S^{(g)*} = V^{*} \Lambda^{(g)*} V^{*\top} + E^{(g)*}, \qquad V^{*\top} V^{*} = I_R, \qquad (3.4) $$
where $R = \prod_{k=1}^{M} R_k$, $V^{*} = V^{(1)} \otimes V^{(2)} \otimes \cdots \otimes V^{(M)}$, $\Lambda^{(g)*} = \Lambda^{(g)}_{(1)} \otimes \Lambda^{(g)}_{(2)} \otimes \cdots \otimes \Lambda^{(g)}_{(M)}$, and $E^{(g)*}$ is the error matrix in group $g$. We refer to this model as multilinear common component analysis (MCCA).
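Given the mode-wise covariances of one group, the Kronecker-structured covariance of equation 3.1 can be assembled directly; a short R sketch follows (for illustration only, since $S^{(g)*}$ has size $\prod_k P_k \times \prod_k P_k$ and quickly becomes large). Here `S_modes` is an assumed list holding $S^{(g)}_{(1)}, \ldots, S^{(g)}_{(M)}$ for one group.

```r
# Kronecker-product covariance of equation 3.1 for one group:
# S_star = S_(1) %x% S_(2) %x% ... %x% S_(M).
S_star <- Reduce(kronecker, S_modes)
stopifnot(nrow(S_star) == prod(sapply(S_modes, nrow)))  # size is prod_k P_k
```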
To find the $R$-dimensional common subspace of the multiple tensor data sets, MCCA determines $V^{(1)}, V^{(2)}, \ldots, V^{(M)}$. As with CCA, we obtain the estimate of $\Lambda^{(g)*}$ for $g = 1, \ldots, G$ as $\hat{\Lambda}^{(g)*} = V^{*\top} S^{(g)*} V^{*}$. With respect to $V^{*}$, we can obtain the estimate by solving the following maximization problem, which is similar to equation 2.4:
$$ \max_{V^{(1)}, \ldots, V^{(M)}} \; \sum_{g=1}^{G} \mathrm{tr}\bigl[\bigl(V^{*\top} S^{(g)*} V^{*}\bigr)^2\bigr] \quad \text{subject to} \quad V^{(k)\top} V^{(k)} = I_{R_k}, \; k = 1, \ldots, M. \qquad (3.5) $$
However, the number of parameters becomes very large when we try to solve this problem directly, which results in a high computational cost. Moreover, it may not be possible to discover the inherent relationships among the variables in each mode simply by solving problem 3.5.

To solve the maximization problem efficiently and identify the inherent relationships, the maximization problem 3.5 can be decomposed into the mode-wise maximization problems represented in the following lemma.

Lemma 1.
An estimate of the parameters $V^{(k)}$ for $k = 1, 2, \ldots, M$ in the maximization problem 3.5 can be obtained by solving the following maximization problem for each mode:
$$ \max_{V^{(1)}, \ldots, V^{(M)}} \; \sum_{g=1}^{G} \prod_{k=1}^{M} \mathrm{tr}\bigl[\bigl(V^{(k)\top} S^{(g)}_{(k)} V^{(k)}\bigr)^2\bigr] \quad \text{subject to} \quad V^{(k)\top} V^{(k)} = I_{R_k}, \; k = 1, \ldots, M. \qquad (3.6) $$
However, we cannot solve this problem for $V^{(k)}$, $k = 1, 2, \ldots, M$, simultaneously. Thus, by collecting the terms unrelated to $V^{(k)}$ in maximization problem 3.6, we obtain the maximization problem for the $k$th mode,
$$ \max_{V^{(k)}} \; f_k(V^{(k)}) = \mathrm{tr}\bigl[V^{(k)\top} M(V^{(k)}) V^{(k)}\bigr] \quad \text{subject to} \quad V^{(k)\top} V^{(k)} = I_{R_k}, \qquad (3.7) $$
where $M(V^{(k)}) = \sum_{g=1}^{G} w^{(-k)}_{(g)} S^{(g)}_{(k)} V^{(k)} V^{(k)\top} S^{(g)}_{(k)}$, in which $w^{(-k)}_{(g)}$ is given by
$$ w^{(-k)}_{(g)} = \prod_{j=1, j \neq k}^{M} \mathrm{tr}\bigl[\bigl(V^{(j)\top} S^{(g)}_{(j)} V^{(j)}\bigr)^2\bigr]. \qquad (3.8) $$
Although an estimate of V(k) can be obtained by solving maximization problem 3.7, this problem is nonconvex, since V(k) is assumed to be an orthogonal matrix. Thus, the maximization problem has several local maxima. However, by choosing the initial values of parameters in the estimation near the global optimal solution, we can obtain the global optimal solution. In section 4, we develop not only an estimation algorithm but also an initialization method for choosing the initial values of the parameters near the global optimal solution. The initialization method helps guarantee the convergence of our algorithm to the mode-wise global optimal solution.
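To make the quantities in problem 3.7 concrete, the following R sketch computes the weight $w^{(-k)}_{(g)}$ of equation 3.8 and the matrix $M(V^{(k)})$; it assumes `S[[g]][[k]]` holds the mode-$k$ covariance of group $g$ and `V[[k]]` the current orthogonal matrix for mode $k$ (assumed names, not taken from the original implementation).

```r
# tr[(V^T S V)^2] for a symmetric S and a matrix V with orthonormal columns.
trace_term <- function(Sk, Vk) {
  P <- crossprod(Vk, Sk %*% Vk)
  sum(P * P)                      # tr(P^2) equals the sum of squared entries of symmetric P
}

# Weight w_(g)^(-k) of equation 3.8: product of trace terms over the other modes.
w_minus_k <- function(S, V, g, k) {
  js <- setdiff(seq_along(V), k)
  prod(sapply(js, function(j) trace_term(S[[g]][[j]], V[[j]])))
}

# M(V^(k)) = sum_g w_(g)^(-k) S^(g)_(k) V^(k) V^(k)T S^(g)_(k)  (problem 3.7).
M_of_V <- function(S, V, k) {
  G <- length(S)
  Reduce(`+`, lapply(1:G, function(g) {
    w_minus_k(S, V, g, k) * S[[g]][[k]] %*% V[[k]] %*% t(V[[k]]) %*% S[[g]][[k]]
  }))
}
```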

Our estimation algorithm consists of two steps: initializing the parameters and iteratively updating the parameters. The initialization step gives us the initial values of the parameters near the global optimal solution for each mode. Next, by iteratively updating the parameters, we can monotonically increase the value of the objective function 3.7 until convergence.

4.1  Initialization

The first step is to initialize the parameters $V^{(k)}$ for each mode. We define an objective function $f'_k(V^{(k)}) = \mathrm{tr}\bigl[V^{(k)\top} M_I^{(k)} V^{(k)}\bigr]$ for $k = 1, \ldots, M$, where $M_I^{(k)} = \sum_{g=1}^{G} w^{(-k)}_{(g)} S^{(g)}_{(k)} S^{(g)}_{(k)}$. Next, we adopt a maximizer of $f'_k(V^{(k)})$ as the initial value of the parameter $V^{(k)}$. To obtain the maximizer, we need an initial value of $w^{(k)} = \bigl(w^{(-k)}_{(1)}, w^{(-k)}_{(2)}, \ldots, w^{(-k)}_{(G)}\bigr)^\top$. The initial value for $w^{(k)}$ is obtained by solving the quadratic programming problem,
(4.1)
where
(4.2)
in which $\lambda^{(g)}_{i(j)}$ is the $i$th largest eigenvalue of $S^{(g)}_{(j)} S^{(g)}_{(j)}$.

Using the initial value of $w^{(k)}$, we can obtain the initial value $V_0^{(k)}$ of the parameter by maximizing $f'_k(V^{(k)})$ for each mode. The maximizer consists of the $R_k$ eigenvectors corresponding to the $R_k$ largest eigenvalues obtained by the eigenvalue decomposition of $M_I^{(k)}$. The theoretical justification for this initialization is discussed in section 5.
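A minimal R sketch of this initialization step is given below. It takes an initial weight vector `w0` as given, either from the quadratic program 4.1 (solved with `kernlab::ipop` in the implementation described in section 6.2) or, as a simplification, the all-ones vector used by the MCCA:FIX variant in section 6.4; the list layout `S[[g]][[k]]` is the same assumed layout as in the earlier sketches.

```r
# Initialization of section 4.1: top-R_k eigenvectors of
# M_I^(k) = sum_g w0[g] * S^(g)_(k) S^(g)_(k).
init_V_k <- function(S, w0, k, Rk) {
  G  <- length(S)
  MI <- Reduce(`+`, lapply(1:G, function(g) w0[g] * (S[[g]][[k]] %*% S[[g]][[k]])))
  eigen(MI, symmetric = TRUE)$vectors[, 1:Rk, drop = FALSE]
}
```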

4.2  Iterative Update of Parameters

The second step is to update the parameters $V^{(k)}$ for each mode. We update the parameters such that the objective function $f_k(V^{(k)})$ is maximized. Let $V_s^{(k)}$ be the value of $V^{(k)}$ at step $s$. Then we solve the surrogate maximization problem
$$ V_{s+1}^{(k)} = \mathop{\mathrm{arg\,max}}_{V^{(k)}} \; \mathrm{tr}\bigl[V^{(k)\top} M(V_s^{(k)}) V^{(k)}\bigr] \quad \text{subject to} \quad V^{(k)\top} V^{(k)} = I_{R_k}. \qquad (4.3) $$
The solution of equation 4.3 consists of the $R_k$ eigenvectors corresponding to the $R_k$ largest eigenvalues obtained by the eigenvalue decomposition of $M(V_s^{(k)})$. By iteratively updating the parameters, the objective function $f_k(V^{(k)})$ is monotonically increased, which allows it to be maximized. The monotonically increasing property is discussed in section 5.
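A sketch of a single update is shown below, reusing the `M_of_V` helper from the sketch in section 3 (assumed to be defined); the update is simply the top-$R_k$ eigenvectors of $M(V_s^{(k)})$.

```r
# One surrogate update (equation 4.3) for mode k, given the current list V.
update_V_k <- function(S, V, k, Rk) {
  Mk <- M_of_V(S, V, k)                                    # uses the current V^(k)
  eigen(Mk, symmetric = TRUE)$vectors[, 1:Rk, drop = FALSE]
}
```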

Our estimation procedure comprises the above estimation steps. The procedure is summarized as algorithm 1.

[Algorithm 1: Estimation algorithm for MCCA (initialization followed by iterative mode-wise updates).]
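The following R sketch strings the previous pieces together in the spirit of algorithm 1; it is only a sketch, not the implementation available at the repository above, and it assumes the `init_V_k` and `update_V_k` helpers sketched in sections 4.1 and 4.2, the `S[[g]][[k]]` list layout, and all-ones initial weights by default (the MCCA:FIX choice of section 6.4) in place of the quadratic program 4.1.

```r
# Sketch of the overall MCCA estimation procedure (in the spirit of algorithm 1).
# S: list over groups of lists over modes of mode-wise covariance matrices.
# R: integer vector (R_1, ..., R_M) of reduced dimensions.
mcca_fit <- function(S, R, w0 = rep(1, length(S)), tol = 1e-8, max_iter = 100) {
  G <- length(S); M <- length(S[[1]])
  V <- lapply(1:M, function(k) init_V_k(S, w0, k, R[k]))
  # Objective of problem 3.6: sum_g prod_k tr[(V^(k)T S^(g)_(k) V^(k))^2].
  objective <- function(V) sum(sapply(1:G, function(g)
    prod(sapply(1:M, function(k) {
      P <- crossprod(V[[k]], S[[g]][[k]] %*% V[[k]]); sum(P * P)
    }))))
  f_old <- objective(V)
  for (it in 1:max_iter) {
    for (k in 1:M) V[[k]] <- update_V_k(S, V, k, R[k])     # mode-wise updates
    f_new <- objective(V)
    if (abs(f_new - f_old) < tol) break
    f_old <- f_new
  }
  # Mode-wise latent covariances, the analogue of Lambda-hat^(g)* = V*T S^(g)* V*.
  Lambda <- lapply(1:G, function(g) lapply(1:M, function(k)
    crossprod(V[[k]], S[[g]][[k]] %*% V[[k]])))
  list(V = V, Lambda = Lambda, objective = f_new)
}
```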

This section presents the theoretical and computational analyses for algorithm 1. The theoretical analysis consists of two steps. First, we prove that the initial values of the parameters obtained in section 4.1 are relatively close to the global optimal solution. If the initial values are close to the global maximum, then we can obtain the global optimal solution even though the maximization problem is nonconvex. Second, we prove that the iterative updates of the parameters in section 4.2 monotonically increase the value of the objective function 3.7 by solving the surrogate problem 4.3. From the monotonically increasing property, the estimated parameters always converge to a stationary point. The combination of these two results enables us to obtain the mode-wise global optimal solution. In the computational analysis, we calculate the computational cost of MCCA and compare it with that of conventional methods to investigate its computational efficacy.

5.1  Analysis of Upper and Lower Bounds

This section aims to provide the upper and lower bounds of the maximization problem 3.7. From the bounds, we find that the initial values in section 4.1 are relatively close to the global optimal solution. Before providing the bounds, we define a contraction ratio.

Definition 1.
Let $f'^{\max}_k$ be the global maximum of $f'_k(V^{(k)})$ and $M^{(k)} = \mathrm{tr}\bigl[M_I^{(k)}\bigr]$. Then the contraction ratio of the data for the $k$th mode is defined by
$$ \alpha^{(k)} = \frac{f'^{\max}_k}{M^{(k)}}. \qquad (5.1) $$

Note that the contraction ratio $\alpha^{(k)}$ satisfies $0 \le \alpha^{(k)} \le 1$, and $\alpha^{(k)} = 1$ if and only if $R_k = P_k$.
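Because $f'_k$ is maximized by the top-$R_k$ eigenvectors of $M_I^{(k)}$ (section 4.1), $f'^{\max}_k$ equals the sum of the $R_k$ largest eigenvalues of $M_I^{(k)}$, while $M^{(k)}$ is the sum of all of them. A short R sketch of $\alpha^{(k)}$ under this reading:

```r
# Contraction ratio of equation 5.1 for mode k, given the matrix M_I^(k).
contraction_ratio <- function(MI_k, Rk) {
  ev <- eigen(MI_k, symmetric = TRUE, only.values = TRUE)$values  # decreasing order
  sum(ev[1:Rk]) / sum(ev)
}
```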

Using $f'^{\max}_k$ and the contraction ratio $\alpha^{(k)}$, we have the following theorem, which reveals the upper and lower bounds of the global maximum in problem 3.7.

Theorem 1.
Let $f^{\max}_k$ be the global maximum of $f_k(V^{(k)})$. Then
$$ \alpha^{(k)} f'^{\max}_k \;\le\; f^{\max}_k \;\le\; f'^{\max}_k, \qquad (5.2) $$
where $\alpha^{(k)}$ is the contraction ratio defined in equation 5.1 and $f'^{\max}_k$ is the global maximum of $f'_k(V^{(k)})$.

This theorem indicates that $f'^{\max}_k \approx f^{\max}_k$ when $\alpha^{(k)} \approx 1$. Thus, it is important to obtain an $\alpha^{(k)}$ that is as close to one as possible. Note that $\alpha^{(k)}$ depends on $w^{(k)}$, as does $V_0^{(k)}$. From this dependency, if we set the initial value of $w^{(k)}$ such that $\alpha^{(k)}$ is as large as possible, then we obtain an initial value $V_0^{(k)}$ that attains a value near $f^{\max}_k$. The following theorem shows that we can compute the initial value of $w^{(k)}$ such that $\alpha^{(k)}$ is maximized.

Theorem 2.

Let $\lambda_0^{(k)}$ and $\lambda_1^{(k)}$ be the vectors of eigenvalues defined in equation 4.2. For $w^{(k)} = \bigl(w^{(-k)}_{(1)}, w^{(-k)}_{(2)}, \ldots, w^{(-k)}_{(G)}\bigr)^\top$ $(k = 1, 2, \ldots, M)$, suppose that the estimate $\hat{w}^{(k)}$ is obtained by solving equation 4.1 for $k = 1, 2, \ldots, M$. Then $\hat{w}^{(k)}$ maximizes $\alpha^{(k)}$.

In fact, α(k) is very close to one with the initial values given in theorem 2 even if Rk is small. This resembles the cumulative contribution ratio in PCA.

5.2  Convergence Analysis

We next verify that our proposed procedure for iteratively updating parameters maximizes the optimization problem 3.7. In algorithm 1, the parameter Vs+1(k) can be obtained by solving the surrogate maximization problem 4.3. Theorem 3 shows that we can monotonically increase the value of the function fk(V(k)) in equation 3.7 by algorithm 1.

Theorem 3.
Let $V_{s+1}^{(k)}$ consist of the $R_k$ eigenvectors corresponding to the $R_k$ largest eigenvalues obtained by the eigenvalue decomposition of $M(V_s^{(k)})$. Then
$$ f_k(V_s^{(k)}) \;\le\; f_k(V_{s+1}^{(k)}). \qquad (5.3) $$

From theorem 1, we obtain initial values of the parameters that are near the global optimal solution. By combining theorems 1 and 3, the solution from algorithm 1 can be characterized by the following corollary.

Corollary 1.

Consider the maximization problem 3.7. Suppose that the initial value of the parameter is obtained by $V_0^{(k)} = \mathop{\mathrm{arg\,max}}_{V^{(k)}} f'_k(V^{(k)})$ and that the parameter $V_s^{(k)}$ is repeatedly updated by algorithm 1. Then the mode-wise global maximum of the maximization problem 3.7 is attained as all the contraction ratios $\alpha^{(k)}$ for $k = 1, 2, \ldots, M$ go to one.

Algorithm 1 does not guarantee the global solution because of the fundamental difficulty of nonconvexity, but it is sufficient for practical purposes. We investigate the issue of convergence to the global solution through the numerical studies in section 6.3.

5.3  Computational Analysis

First, we analyze the computational cost. To simplify the analysis, we set $P = \max_{j} P_j$ for $j = 1, 2, \ldots, M$; $P$ is then an upper bound of $R_j$ for all $j$. We calculate the upper bound of the computational complexity.

The expensive computations in each iteration of algorithm 1 consist of three parts: forming $M(V_s^{(k)})$, the eigenvalue decomposition of $M(V_s^{(k)})$, and updating the latent covariance matrices $\Lambda^{(g)}_{(k)}$. These steps are $O(GM^2P^3)$, $O(P^3)$, and $O(GMP^3)$, respectively. The total computational complexity per iteration is then $O(GM^2P^3)$.

Next, we analyze the memory requirement of algorithm 1. MCCA represents the original tensor data with fewer parameters by projecting the data onto a lower-dimensional space. This requires the $P_k \times R_k$ projection matrices $V^{(k)}$ for $k = 1, 2, \ldots, M$. MCCA projects the data of size $N\prod_{k=1}^{M} P_k$ onto a space of size $N\prod_{k=1}^{M} R_k$, where $N = \sum_{g=1}^{G} N_g$. Thus, the required number of parameters is $\sum_{k=1}^{M} P_k R_k + N\prod_{k=1}^{M} R_k$. MPCA requires the same amount of memory as MCCA. Meanwhile, CCA and PCA need a projection matrix of size $\prod_{k=1}^{M} P_k \times R$, so the required number of parameters is $R\prod_{k=1}^{M} P_k + NR$.
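As an illustration (our own arithmetic, using the medium AT&T(ORL) setting of Table 2 with $46 \times 56$ images, $G = 20$ groups of 10 samples, so $N = 200$, and $R_1 = R_2 = 5$): MCCA and MPCA store $46 \cdot 5 + 56 \cdot 5 + 200 \cdot 25 = 5510$ values, whereas PCA and CCA with the comparable choice $R = R_1 R_2 = 25$ store $25 \cdot 2576 + 200 \cdot 25 = 69{,}400$ values.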

To compare the computational costs clearly, the upper bounds of the computational complexity and the memory requirement are summarized in Table 1. Table 1 shows that the computational complexity of MCCA is superior to that of the other algorithms and that its complexity does not depend on the sample size. In contrast, the MPCA algorithm is affected by the sample size (Lu, Plataniotis, & Venetsanopoulos, 2008). Additionally, MCCA and MPCA require a large amount of memory when the number of modes in a data set is large, but their memory requirements are much smaller than those of PCA and CCA.

Table 1:

Comparisons of the Computational Complexity and the Memory Requirement.

Method   Computational Complexity   Memory Requirement
PCA      $O(P^{3M})$                $R\prod_{k=1}^{M} P_k + NR$
CCA      $O(GP^{3M})$               $R\prod_{k=1}^{M} P_k + NR$
MPCA     $O(NMP^{M+1})$             $\sum_{k=1}^{M} P_k R_k + N\prod_{k=1}^{M} R_k$
MCCA     $O(GM^2P^3)$               $\sum_{k=1}^{M} P_k R_k + N\prod_{k=1}^{M} R_k$

To demonstrate the efficacy of MCCA, we applied MCCA, PCA, CCA, and MPCA to image compression tasks.

6.1  Experimental Setting

For the experiments, we prepared the following three image data sets:

  • The MNIST data set consists of handwritten digit images ($0, 1, \ldots, 9$) of size $28 \times 28$ pixels. The data set includes a training set of 60,000 images and a test set of 10,000 images. We used the first 10 training images for each group. The MNIST data set (Lecun, Bottou, Bengio, & Haffner, 1998) is available at http://yann.lecun.com/exdb/mnist/.

  • AT&T (ORL) face data set contains gray-scale facial images of 40 people. The data set has 10 images sized 92×112 pixels for each person. We used images resized by a factor of 0.5 to improve the efficiency of the experiment. The AT&T face data set is available at https://git-disl.github.io/GTDLBench/datasets/att_face_dataset/. All the credits of this data set go to AT&T Laboratories Cambridge.

  • The cropped AR database contains color facial images of 100 people, cropped around the face. The size of the images is $120 \times 165 \times 3$ pixels. The data set contains 26 images in each group, 12 of which are images of people wearing sunglasses or scarves. We used the cropped facial images of 50 males who were not wearing sunglasses or scarves. Due to memory limitations, we resized these images by a factor of 0.25. The AR database (Martinez & Benavente, 1998; Martinez & Kak, 2001) is available at http://www2.ece.ohio-state.edu/∼aleix/ARdatabase.html.

The data set characteristics are summarized in Table 2.

Table 2:

Summary of the Data Sets.

Data Set     Group Size   Sample Size (/Group)   Number of Dimensions   Number of Groups
MNIST        Small        10                     28×28 = 784            10
AT&T (ORL)   Small        10                     46×56 = 2576           10
             Medium       10                     46×56 = 2576           20
             Large        10                     46×56 = 2576           40
Cropped AR   Small        14                     30×41×3 = 7380         10
             Medium       14                     30×41×3 = 7380         25
             Large        14                     30×41×3 = 7380         50

To compress these images, we performed dimensionality reductions by MCCA, PCA, CCA, and MPCA, as follows. We vectorized the tensor data set before performing PCA and CCA. In MCCA, the images were compressed and reconstructed according to the following steps:

  1. Prepare the multiple image data sets $\mathcal{X}^{(g)} \in \mathbb{R}^{P_1 \times P_2 \times \cdots \times P_M \times N_g}$ for $g = 1, 2, \ldots, G$.

  2. Compute the covariance matrices of $\mathcal{X}^{(g)}$ for $g = 1, 2, \ldots, G$.

  3. From these covariance matrices, compute the linear transformation matrices $V_i \in \mathbb{R}^{P_i \times R_i}$ for $i = 1, 2, \ldots, M$ for mapping to the $(R_1, R_2, \ldots, R_M)$-dimensional latent space.

  4. Map the $i$th sample $\mathcal{X}^{(g)}_i$ to $\mathcal{X}^{(g)}_i \times_1 V_1^\top \times_2 V_2^\top \cdots \times_M V_M^\top \in \mathbb{R}^{R_1 \times R_2 \times \cdots \times R_M}$, where the operator $\times_i$ denotes the $i$-mode product of a tensor (Kolda & Bader, 2009).

  5. Reconstruct the $i$th sample as $\tilde{\mathcal{X}}^{(g)}_i = \mathcal{X}^{(g)}_i \times_1 V_1 V_1^\top \times_2 V_2 V_2^\top \cdots \times_M V_M V_M^\top$ (a minimal sketch of steps 4 and 5 is given after this list).
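The sketch below illustrates steps 4 and 5 in R for a two-mode (grayscale image) sample; `V1` and `V2` are the estimated $P_k \times R_k$ matrices, and the general $M$-mode case simply applies the $i$-mode products for every mode.

```r
# Step 4: compress one 2-mode sample, X_i x_1 V1^T x_2 V2^T  (an R_1 x R_2 core).
compress_2mode <- function(Xi, V1, V2) t(V1) %*% Xi %*% V2

# Step 5: reconstruct, X_i x_1 V1 V1^T x_2 V2 V2^T  (back to P_1 x P_2).
reconstruct_2mode <- function(Zi, V1, V2) V1 %*% Zi %*% t(V2)

# Example: Xi_hat <- reconstruct_2mode(compress_2mode(Xi, V1, V2), V1, V2)
```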

Meanwhile, PCA and MPCA each require a single data set. Thus, we aggregated the data sets as $\mathcal{X} = [\mathcal{X}^{(1)}, \mathcal{X}^{(2)}, \ldots, \mathcal{X}^{(G)}] \in \mathbb{R}^{P_1 \times P_2 \times \cdots \times P_M \times \sum_{g=1}^{G} N_g}$ and performed PCA and MPCA on the aggregated data set $\mathcal{X}$.

6.2  Performance Assessment

For MCCA and MPCA, the reduced dimensions $R_1$ and $R_2$ were chosen to be equal, and, for the three-mode cropped AR data set, $R_3$ was fixed at two. All computations were performed in R (version 3.6) (R Core Team, 2019). In the initialization of MCCA, the quadratic programming problem was solved using the function ipop in the package kernlab. MPCA was implemented as the function mpca in the package rTensor. (The implementations of MCCA, PCA, and CCA are available at https://github.com/yoshikawa-kohei/MCCA.)

To assess their performance, we calculated the reconstruction error rate (RER) under the same compression ratio (CR). RER is defined by
(6.1)
where $\tilde{\mathcal{X}} = [\tilde{\mathcal{X}}^{(1)}, \tilde{\mathcal{X}}^{(2)}, \ldots, \tilde{\mathcal{X}}^{(G)}]$ is the aggregated data set of reconstructed tensors $\tilde{\mathcal{X}}^{(g)} = [\tilde{\mathcal{X}}^{(g)}_1, \tilde{\mathcal{X}}^{(g)}_2, \ldots, \tilde{\mathcal{X}}^{(g)}_{N_g}]$ for $g = 1, 2, \ldots, G$, and $\|\mathcal{X}\|_F$ is the norm of a tensor $\mathcal{X} \in \mathbb{R}^{P_1 \times P_2 \times \cdots \times P_M}$ computed by
$$ \|\mathcal{X}\|_F = \sqrt{\sum_{p_1=1}^{P_1} \sum_{p_2=1}^{P_2} \cdots \sum_{p_M=1}^{P_M} x_{p_1, p_2, \ldots, p_M}^2}, \qquad (6.2) $$
in which $x_{p_1, p_2, \ldots, p_M}$ is the $(p_1, p_2, \ldots, p_M)$ element of $\mathcal{X}$. In addition, we defined CR as
(6.3)
The number of required parameters for MCCA and MPCA is $\sum_{k=1}^{M} P_k R_k + N\prod_{k=1}^{M} R_k$, whereas that for CCA and PCA is $R\prod_{k=1}^{M} P_k + NR$.
Figure 1 plots the RER obtained by estimating various reduced dimensions for the AT&T(ORL) data set with group sizes of small, medium, and large. As the figures for the results of the other data sets were similar to Figure 1, we show them in the supplementary materials S1.
Figure 1: Plots of RER versus CR for the AT&T(ORL) data set of various group sizes: (a) small, (b) medium, and (c) large.

From Figure 1, we observe that the RER of MCCA is the smallest for any value of CR. This indicates that MCCA performs better than the other methods. In addition, note that CCA performs better than MPCA only for fairly small values of CR, even though CCA is a method for vector data, whereas MPCA performs better for larger values of CR. This implies the limitations of CCA for vector data.

Next, we consider group size by comparing panels a, b, and c in Figure 1. The value of CR at the intersection of CCA and MPCA increases as the group size increases. This indicates that MPCA has more trouble extracting an appropriate latent space as the group size increases. Since MPCA does not consider the group structure, it cannot properly estimate the covariance structure when the group size is large.

Figure 2 compares the runtimes for the AT&T(ORL) data set with group sizes of small, medium, and large. Although Table 1 indicates that MCCA has a superior computational complexity, Figure 2 shows that MCCA is slower than MPCA for every size of data set. This probably arises from the difference in implementation: MCCA uses our hand-built source code, whereas MPCA uses the package rTensor. When we compare MCCA with CCA, however, MCCA is superior in terms of both computational complexity and runtime.
Figure 2: Plots of runtime versus CR for the AT&T(ORL) data set of various group sizes: (a) small, (b) medium, and (c) large.
Figure 3 shows the reconstructed images for the AT&T(ORL) data set with the medium group size. The figure was obtained by applying the four methods with $R_1 = R_2 = 5$ and $R = 2$. By setting the ranks in this way, we can compare the images under almost the same CR. PCA, CCA, and MPCA can recover the average structure of the face images, but they cannot deal with changes in the angle of the face. MCCA, in contrast, also recovers the detailed differences in each image.
Figure 3: The reconstructed images for the AT&T(ORL) data set with the medium group size under almost the same CR.

6.3  Behavior of Contraction Ratio

We examined the behavior of the contraction ratio $\alpha^{(k)}$. We performed MCCA on the AT&T(ORL) data set with the medium group size and computed $\alpha^{(1)}$ and $\alpha^{(2)}$ for the pairs of reduced dimensions $(R_1, R_2) \in \{1, 2, \ldots, 25\} \times \{1, 2, \ldots, 25\}$.

Figure 4 shows the values of $\alpha^{(1)}$ and $\alpha^{(2)}$ for all pairs of $R_1$ and $R_2$. As shown, $\alpha^{(1)}$ and $\alpha^{(2)}$ were invariant to variations in $R_2$ and $R_1$, respectively. Therefore, to facilitate visualization of the changes in $\alpha^{(k)}$, we present Figure 5, which shows $\alpha^{(1)}$ and $\alpha^{(2)}$ for $R_2 = 1$ and $R_1 = 1$, respectively. From these, we observe that when both $R_1$ and $R_2$ are greater than eight, both $\alpha^{(1)}$ and $\alpha^{(2)}$ are close to one.
Figure 4: $\alpha^{(1)}$ and $\alpha^{(2)}$ versus pairs of reduced dimensions $(R_1, R_2)$.
Figure 5: $\alpha^{(1)}$ and $\alpha^{(2)}$ versus $R_1$ and $R_2$, respectively.

6.4  Efficacy of Solving the Quadratic Programming Problem

We investigated the usefulness of determining the initial value of $w^{(k)}$ by solving the quadratic programming problem 4.1. We applied MCCA to the AT&T(ORL) data set with the small, medium, and large numbers of groups. In addition, we used a smaller setting with a group size of three. To determine the initial value of $w^{(k)}$, we consider three methods: solving the quadratic programming problem 4.1 (MCCA:QP), setting all values of $w^{(k)}$ to one (MCCA:FIX), and setting the values by random sampling from the uniform distribution $U(0, 1)$ (MCCA:RANDOM). We computed $\alpha^{(k)}$ for the reduced dimensions $R_1 = R_2 \in \{1, 2, \ldots, 10\}$ for each of these methods.

To evaluate the performance of these methods, we compared the values of $\alpha^{(k)}$ and the number of iterations in the estimation. The number of iterations in the estimation is the number of repetitions of lines 7 to 9 in algorithm 1. For MCCA:RANDOM, we performed 50 trials and calculated averages of each of these indices.

Figure 6 shows the comparisons of $\alpha^{(1)}$ and $\alpha^{(2)}$ when the initialization was performed by MCCA:QP, MCCA:FIX, and MCCA:RANDOM for the AT&T(ORL) data set with a group size of three. MCCA:QP provides the largest values of $\alpha^{(1)}$ and $\alpha^{(2)}$. Figure 7 shows the number of iterations. MCCA:QP gives the smallest number of iterations for almost all values of the reduced dimensions, which indicates that MCCA:QP converges to a solution faster than the other initialization methods. However, when the reduced dimension is greater than or equal to eight, the other methods are competitive with MCCA:QP. The lack of difference in the number of iterations can be attributed to the closeness of the initial values to the global optimal solution: as Figure 6 shows, when $R_1$ and $R_2$ are greater than or equal to eight, $\alpha^{(1)}$ and $\alpha^{(2)}$ are sufficiently close to one, so by theorem 1 the initial values are close to the global optimal solution. Hence, the three methods require almost the same number of iterations.
Figure 6: Comparisons of $\alpha^{(1)}$ and $\alpha^{(2)}$ computed using the initial values obtained from the initializations MCCA:QP, MCCA:FIX, and MCCA:RANDOM with the AT&T(ORL) data set for a group size of three.
Figure 7: Comparison of the number of iterations when the initialization was performed by MCCA:QP, MCCA:FIX, and MCCA:RANDOM with the AT&T(ORL) data set for a group size of three.
Figures 8 and 9 show the corresponding comparisons for the AT&T(ORL) data set with the medium group size. Since the figures for the other group sizes are similar to Figures 8 and 9, we show them in the supplementary materials S2. Figure 8 shows results similar to those in Figure 6, whereas Figure 9 shows competitive performance for all reduced dimensions.
Figure 8: Comparisons of $\alpha^{(1)}$ and $\alpha^{(2)}$ computed using the initial values obtained from the initializations MCCA:QP, MCCA:FIX, and MCCA:RANDOM with the AT&T(ORL) data set and the medium group size.
Figure 9: Comparison of the number of iterations when the initialization was performed by MCCA:QP, MCCA:FIX, and MCCA:RANDOM with the AT&T(ORL) data set and the medium group size.

We have developed multilinear common component analysis (MCCA) by introducing a covariance structure based on the Kronecker product. To efficiently solve the nonconvex optimization problem for MCCA, we have proposed an iterative updating algorithm with favorable theoretical convergence properties. Numerical experiments have shown the usefulness of MCCA.

Specifically, the three initialization methods for MCCA were shown to be competitive in terms of the number of iterations. As the number of groups increases, the overall number of samples increases, which may be why all the methods required almost the same number of iterations for the small, medium, and large numbers of groups. Note that in this study, we used the Kronecker product representation to estimate the covariance matrix for tensor data sets. Greenewald, Zhou, and Hero (2019) used the Kronecker sum representation for estimating the covariance matrix, and it would be interesting to extend MCCA to this and other covariance representations.

We provide two basic lemmas about Kronecker products before we prove lemma 1.

Lemma 2.
For matrices $A, B, C$, and $D$ such that the matrix products $AC$ and $BD$ can be formed, the following equation holds: $(A \otimes B)(C \otimes D) = (AC) \otimes (BD)$.
Lemma 3.
For square matrices A and B, the following equation holds:

These lemmas are known as the mixed-product property and the spectrum property, respectively. See Harville (1998) for detailed proofs.
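The following short R snippet numerically illustrates the mixed-product property of lemma 2 and the trace identity $\mathrm{tr}(A \otimes B) = \mathrm{tr}(A)\,\mathrm{tr}(B)$ that follows from the spectrum property; both checks return TRUE.

```r
set.seed(1)
A <- matrix(rnorm(6), 2, 3); C <- matrix(rnorm(12), 3, 4)
B <- matrix(rnorm(4), 2, 2); D <- matrix(rnorm(4), 2, 2)
# Mixed-product property (lemma 2): (A %x% B)(C %x% D) = (AC) %x% (BD).
all.equal(kronecker(A, B) %*% kronecker(C, D), kronecker(A %*% C, B %*% D))
# Trace identity for square matrices: tr(E %x% H) = tr(E) * tr(H).
E <- crossprod(matrix(rnorm(9), 3, 3)); H <- crossprod(matrix(rnorm(4), 2, 2))
all.equal(sum(diag(kronecker(E, H))), sum(diag(E)) * sum(diag(H)))
```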

Proof of Lemma 1.
For the maximization problem 3.5, move the summation over the index $g$ out of the $\mathrm{tr}(\cdot)$ and replace $S^{(g)*}$ and $V^{*}$ with $S^{(g)}_{(1)} \otimes S^{(g)}_{(2)} \otimes \cdots \otimes S^{(g)}_{(M)}$ and $V^{(1)} \otimes V^{(2)} \otimes \cdots \otimes V^{(M)}$, respectively. Then
$$ \mathrm{tr}\bigl[\bigl(V^{*\top} S^{(g)*} V^{*}\bigr)^2\bigr] = \mathrm{tr}\Bigl[\Bigl(\bigl(V^{(1)} \otimes \cdots \otimes V^{(M)}\bigr)^\top \bigl(S^{(g)}_{(1)} \otimes \cdots \otimes S^{(g)}_{(M)}\bigr) \bigl(V^{(1)} \otimes \cdots \otimes V^{(M)}\bigr)\Bigr)^2\Bigr]. $$
By lemmas 2 and 3, we have
$$ \mathrm{tr}\bigl[\bigl(V^{*\top} S^{(g)*} V^{*}\bigr)^2\bigr] = \prod_{k=1}^{M} \mathrm{tr}\bigl[\bigl(V^{(k)\top} S^{(g)}_{(k)} V^{(k)}\bigr)^2\bigr]. $$
This leads to the maximization problem in lemma 1.

Theorem 1 can be easily shown from the following lemma.

Lemma 4.
Consider the maximization problem
(B.1)
Let $M^{(k)} = \mathrm{tr}\bigl[\sum_{g=1}^{G} w^{(-k)}_{(g)} S^{(g)}_{(k)} S^{(g)}_{(k)}\bigr]$. Then
$$ \frac{f'_k(V^{(k)})^2}{M^{(k)}} \;\le\; f_k(V^{(k)}) \;\le\; f'_k(V^{(k)}). $$
Proof of Lemma 4.
First, we prove $f_k(V^{(k)}) \le f'_k(V^{(k)})$. For any orthogonal matrix $V^{(k)} \in \mathbb{R}^{P_k \times R_k}$, we can always find a matrix $V_{\perp}^{(k)} \in \mathbb{R}^{P_k \times (P_k - R_k)}$ with orthonormal columns that satisfies $V^{(k)\top} V_{\perp}^{(k)} = O$. Then the equation $V^{(k)} V^{(k)\top} + V_{\perp}^{(k)} V_{\perp}^{(k)\top} = I_{P_k}$ holds. By definition,
Thus, we have obtained $f_k(V^{(k)}) \le f'_k(V^{(k)})$.
Next, we prove $f'_k(V^{(k)})^2 \le M^{(k)} f_k(V^{(k)})$. We define the following block matrices:
Note that since $S^{(g)}_{(k)}$ is a symmetric positive-definite matrix, it can be decomposed as $S^{(g)}_{(k)} = S^{(g)1/2}_{(k)} S^{(g)1/2}_{(k)}$. We calculate the traces of $A^\top A$, $A^\top B$, and $B^\top B$, respectively:
From the Cauchy–Schwarz inequality, we have
By dividing both sides of the inequality by $M^{(k)}$, we obtain $f'_k(V^{(k)})^2 / M^{(k)} \le f_k(V^{(k)})$.
Proof of Theorem 1.
Let $f'^{\max}_k$ be the global maximum of $f'_k(V^{(k)})$ and $V_0^{(k)} = \mathop{\mathrm{arg\,max}}_{V^{(k)}} f'_k(V^{(k)})$. From lemma 4 and the definition of $\alpha^{(k)}$, we have
Let $f^{\max}_k$ be the global maximum of $f_k(V^{(k)})$. It then holds that $f_k(V_0^{(k)}) \le f^{\max}_k$. Thus,
Let $V_0^{*(k)} = \mathop{\mathrm{arg\,max}}_{V^{(k)}} f_k(V^{(k)})$. From lemma 4, we have
Since $f'_k(V_0^{*(k)}) \le f'^{\max}_k$, we have
Hence, we have obtained $\alpha^{(k)} f'^{\max}_k \le f^{\max}_k \le f'^{\max}_k$.
Proof of Theorem 2.
By definition
(C.1)
By using the eigenvalue representation, we can rewrite the numerator of α(k) as
The denominator of α(k) can be represented as the sum of eigenvalues as follows:
Thus, we can transform α(k) as follows:
When we set
we can reformulate α(k) as
Thus, we obtain the following maximization problem:
Note that the constraints can be obtained by the definition of w(k). In addition, this maximization problem can be reformulated as
Since $\lambda_0^{(k)\top} w^{(k)} / \lambda_1^{(k)\top} w^{(k)}$ is nonnegative, solving the optimization problem for the squared objective function does not lose generality. Thus, we can consider the following minimization problem:
Additionally, from the invariance under multiplication of w(k) by a constant, we obtain the following objective function of the quadratic programming problem:
Proof of Theorem 3.
We define the following block matrices:
Here, we calculate the traces of $A_s^\top A_s$, $A_s^\top A_{s+1}$, and $A_{s+1}^\top A_{s+1}$. The calculations of $\mathrm{tr}[A_s^\top A_s]$ and $\mathrm{tr}[A_{s+1}^\top A_{s+1}]$ are the same as that of $\mathrm{tr}[A^\top A]$ in lemma 4, with $V^{(k)}$ replaced by $V_s^{(k)}$ and $V_{s+1}^{(k)}$, respectively. Thus, we obtain
Since $V_{s+1}^{(k)} = \mathop{\mathrm{arg\,max}}_{V^{(k)}} \mathrm{tr}\bigl[V^{(k)\top} \sum_{g=1}^{G} w^{(-k)}_{(g)} S^{(g)}_{(k)} V_s^{(k)} V_s^{(k)\top} S^{(g)}_{(k)} V^{(k)}\bigr]$, we have
From the positivity of both sides of the inequality, it holds that
In addition, from the Cauchy–Schwarz inequality, we have
Thus,
Then, we have obtained $f_k(V_s^{(k)})^2 \le f_k(V_s^{(k)}) f_k(V_{s+1}^{(k)})$. By dividing both sides of the inequality by $f_k(V_s^{(k)})$, we obtain the inequality $f_k(V_s^{(k)}) \le f_k(V_{s+1}^{(k)})$.

We thank the reviewer for helpful comments and constructive suggestions. S.K. was supported by JSPS KAKENHI grants JP19K11854 and JP20H02227, and MEXT KAKENHI grants JP16H06429, JP16K21723, and JP16H06430.

Allen, G. (2012). Sparse higher-order principal components analysis. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics (pp. 27–36).
Badeau, R., & Boyer, R. (2008). Fast multilinear singular value decomposition for structured tensors. SIAM Journal on Matrix Analysis and Applications, 30(3), 1008–1021.
Bensmail, H., & Celeux, G. (1996). Regularized Gaussian discriminant analysis through eigenvalue decomposition. Journal of the American Statistical Association, 91(436), 1743–1748.
Boik, R. J. (2002). Spectral models for covariance matrices. Biometrika, 89(1), 159–182.
Carroll, J. D., & Chang, J.-J. (1970). Analysis of individual differences in multidimensional scaling via an N-way generalization of "Eckart-Young" decomposition. Psychometrika, 35(3), 283–319.
Flury, B. N. (1984). Common principal components in K groups. Journal of the American Statistical Association, 79(388), 892–898.
Flury, B. N. (1986). Asymptotic theory for common principal component analysis. Annals of Statistics, 14(2), 418–430.
Flury, B. N. (1988). Common principal components and related multivariate models. New York: Wiley.
Flury, B. N., & Gautschi, W. (1986). An algorithm for simultaneous orthogonal transformation of several positive definite symmetric matrices to nearly diagonal form. SIAM Journal on Scientific and Statistical Computing, 7(1), 169–184.
Greenewald, K., Zhou, S., & Hero III, A. (2019). Tensor graphical lasso (TeraLasso). Journal of the Royal Statistical Society: Series B (Statistical Methodology), 81(5), 901–931.
Harshman, R. A. (1970). Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multimodal factor analysis. UCLA Working Papers in Phonetics, 16(1), 84.
Harville, D. A. (1998). Matrix algebra from a statistician's perspective. New York: Springer-Verlag.
Jolliffe, I. (2002). Principal component analysis. New York: Springer-Verlag.
Kermoal, J. P., Schumacher, L., Pedersen, K. I., Mogensen, P. E., & Frederiksen, F. (2002). A stochastic MIMO radio channel model with experimental validation. IEEE Journal on Selected Areas in Communications, 20(6), 1211–1226.
Kiers, H. A. (2000). Towards a standardized notation and terminology in multiway analysis. Journal of Chemometrics, 14(3), 105–122.
Kolda, T. G., & Bader, B. W. (2009). Tensor decompositions and applications. SIAM Review, 51(3), 455–500.
Lai, Z., Xu, Y., Chen, Q., Yang, J., & Zhang, D. (2014). Multilinear sparse principal component analysis. IEEE Transactions on Neural Networks and Learning Systems, 25(10), 1942–1950.
Lecun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324.
Lu, H., Plataniotis, K. N., & Venetsanopoulos, A. N. (2008). MPCA: Multilinear principal component analysis of tensor objects. IEEE Transactions on Neural Networks, 19(1), 18–39.
Manly, B. F. J., & Rayner, J. C. W. (1987). The comparison of sample covariance matrices using likelihood ratio tests. Biometrika, 74(4), 841–847.
Martinez, A., & Benavente, R. (1998). The AR face database (CVC Technical Report 24).
Martinez, A. M., & Kak, A. C. (2001). PCA versus LDA. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(2), 228–233.
Park, H., & Konishi, S. (2020). Sparse common component analysis for multiple high-dimensional datasets via noncentered principal component analysis. Statistical Papers, 61, 2283–2311.
Pearson, K. (1901). LIII. On lines and planes of closest fit to systems of points in space. London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2(11), 559–572.
Pourahmadi, M., Daniels, M. J., & Park, T. (2007). Simultaneous modelling of the Cholesky decomposition of several covariance matrices. Journal of Multivariate Analysis, 98(3), 568–587.
R Core Team (2019). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing.
Wang, H., Banerjee, A., & Boley, D. (2011). Common component analysis for multiple covariance matrices. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 956–964). New York: ACM.
Wang, S., Sun, M., Chen, Y., Pang, E., & Zhou, C. (2012). STPCA: Sparse tensor principal component analysis for feature extraction. In Proceedings of the 21st International Conference on Pattern Recognition (pp. 2278–2281). Piscataway, NJ: IEEE.
Werner, K., Jansson, M., & Stoica, P. (2008). On estimation of covariance matrices with Kronecker product structure. IEEE Transactions on Signal Processing, 56(2), 478–491.
Yu, K., Bengtsson, M., Ottersten, B., McNamara, D., Karlsson, P., & Beach, M. (2004). Modeling of wide-band MIMO radio channels based on NLOS indoor measurements. IEEE Transactions on Vehicular Technology, 53(3), 655–665.