Abstract
We consider the problem of extracting a common structure from multiple tensor data sets. For this purpose, we propose multilinear common component analysis (MCCA) based on Kronecker products of mode-wise covariance matrices. MCCA constructs a common basis, represented by linear combinations of the original variables, that loses as little information of the multiple tensor data sets as possible. We also develop an estimation algorithm for MCCA that guarantees mode-wise global convergence. Numerical studies are conducted to show the effectiveness of MCCA.
1 Introduction
Various statistical methodologies for extracting useful information from large amounts of data have been studied over the decades, and they have become all the more important since the appearance of big data. In the present era, it is also important to discover a common structure shared by multiple data sets. In an early study, Flury (1984) focused on the structure of the covariance matrices of multiple data sets and discussed the heterogeneity of that structure, reporting that population covariance matrices differ among multiple data sets in practical applications. Many methodologies have been developed for treating the heterogeneity between the covariance matrices of multiple data sets (see Flury, 1986, 1988; Flury & Gautschi, 1986; Pourahmadi, Daniels, & Park, 2007; Wang, Banerjee, & Boley, 2011; Park & Konishi, 2020).
Among such methodologies, common component analysis (CCA; Wang et al., 2011) is an effective statistical tool. The central idea of CCA is to reduce the dimensionality of the data while losing as little information of the multiple data sets as possible. To reduce the dimensionality, CCA reconstructs the data with a few new variables that are linear combinations of the original variables. To account for the heterogeneity among the covariance matrices of multiple data sets, CCA assumes a different covariance matrix for each data set. Many papers have treated statistical methodologies that use multiple covariance matrices: discriminant analysis (Bensmail & Celeux, 1996), spectral decomposition (Boik, 2002), and a likelihood ratio test for multiple covariance matrices (Manly & Rayner, 1987). It should be noted that principal component analysis (PCA) (Pearson, 1901; Jolliffe, 2002) is a technique similar to CCA. In fact, CCA is a generalization of PCA: PCA can be applied only to a single data set, whereas CCA can be applied to multiple data sets.
Meanwhile, in various fields of research, including machine learning and computer vision, the main interest has been in tensor data, which have a multidimensional array structure. A simple approach for applying conventional statistical methodologies, such as PCA, to tensor data is first to transform the tensor data into vector data and then apply the methodology. However, such an approach causes the following problems:
Because it discards the tensor structure of the data, the approach ignores the higher-order relationships inherent in the original tensor data.
Transforming tensor data into vector data substantially increases the number of features, which in turn incurs a high computational cost.
To overcome these problems, statistical methodologies for tensor data analyses have been proposed that take the tensor structure of the data into consideration. Such methods enable us to accurately extract higher-order inherent relationships in a tensor data set. In particular, many existing statistical methodologies have been extended for tensor data, for example, multilinear principal component analysis (MPCA) (Lu et al., 2008) and sparse PCA for tensor data analysis (Allen, 2012; Wang, Sun, Chen, Pang, & Zhou, 2012; Lai, Xu, Chen, Yang, & Zhang, 2014), as well as others (see Carroll & Chang, 1970; Harshman, 1970; Kiers, 2000; Badeau & Boyer, 2008; Kolda & Bader, 2009).
In this letter, we extend CCA to tensor data analysis, proposing multilinear common component analysis (MCCA). MCCA discovers the common structure of multiple tensor data sets while losing as little of their information as possible. To identify the common structure, we estimate a common basis constructed as linear combinations of the original variables. For estimating the common basis, we develop a new estimation algorithm based on the idea of CCA. In developing the estimation algorithm, two issues must be addressed: the convergence properties of the algorithm and its computational cost. To determine the convergence properties, we first investigate the relationship between the initial values of the parameters and the global optimal solution, and then the monotonic convergence of the estimation algorithm. These analyses reveal that, under some conditions, our proposed algorithm guarantees convergence to the mode-wise global optimal solution. To assess the computational efficiency, we calculate the computational cost of our proposed algorithm.
The rest of the letter is organized as follows. In section 2, we review the formulation and the minimization problem of CCA. In section 3, we formulate the MCCA model by constructing the covariance matrices of tensor data based on a Kronecker product representation. Then we formulate the estimation algorithm for MCCA in section 4. In section 5, we present the theoretical properties of our proposed algorithm and analyze its computational cost. The efficacy of MCCA is demonstrated through numerical experiments in section 6. Concluding remarks are presented in section 7. Technical proofs are provided in the appendixes. Our implementation of MCCA and supplementary materials are available at https://github.com/yoshikawa-kohei/MCCA.
2 Common Component Analysis
where tr(·) denotes the trace of a matrix. A crucial issue in solving the maximization problem 2.4 is its nonconvexity. The problem is indeed nonconvex, since it is defined on a set of orthogonal matrices, which is a nonconvex set. In general, it is difficult to find the global optimal solution of a nonconvex optimization problem. To overcome this drawback, Wang et al. (2011) proposed an estimation algorithm in which the estimated parameters are guaranteed to constitute the global optimal solution under some conditions.
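As a rough illustration of such a trace criterion (not necessarily the exact form of problem 2.4), the following R sketch evaluates, for a candidate orthonormal basis, the summed squared Frobenius norms of the projected group covariance matrices; the function name, data, and sizes are hypothetical.

```r
# A rough sketch (not necessarily criterion 2.4): for an orthonormal basis W,
# sum the squared Frobenius norms of the projected group covariances
# t(W) %*% S_g %*% W over the groups.
cca_objective <- function(W, S_list) {
  sum(sapply(S_list, function(S) {
    P <- crossprod(W, S %*% W)  # projected covariance t(W) %*% S %*% W
    sum(P^2)                    # squared Frobenius norm
  }))
}

# Toy example with two groups of vector data (hypothetical sizes).
set.seed(1)
X1 <- matrix(rnorm(100 * 20), 100, 20)
X2 <- matrix(rnorm(100 * 20), 100, 20)
S_list <- list(cov(X1), cov(X2))
W <- qr.Q(qr(matrix(rnorm(20 * 3), 20, 3)))  # a 20 x 3 orthonormal basis
cca_objective(W, S_list)
```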
3 Multilinear Common Component Analysis
In this section, we introduce a mathematical formulation of MCCA, an extension of CCA to tensor data analysis. Moreover, we formulate an optimization problem for MCCA and investigate its convergence properties.
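For concreteness, the covariance structure described in the abstract, a Kronecker product of mode-wise covariance matrices, can be written for the gth group as follows; the notation here is illustrative and need not match the symbols used in equations 3.1 to 3.5:

$$
\Sigma_g \;=\; \Sigma_g^{(K)} \otimes \Sigma_g^{(K-1)} \otimes \cdots \otimes \Sigma_g^{(1)}, \qquad g = 1, \ldots, G,
$$

where $\Sigma_g^{(k)}$ denotes the mode-$k$ covariance matrix of the $g$th tensor data set and $K$ is the number of modes.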
To solve the maximization problem efficiently and identify the inherent relationships, the maximization problem 3.5 can be decomposed into the mode-wise maximization problems represented in the following lemma.
4 Estimation
Our estimation algorithm consists of two steps: initializing the parameters and iteratively updating them. The initialization step provides initial values of the parameters that lie near the global optimal solution for each mode. Then, by iteratively updating the parameters, we can monotonically increase the value of the objective function 3.7 until convergence.
4.1 Initialization
Using the initial values obtained above, we can obtain the initial value of the projection matrix for each mode by maximizing the corresponding mode-wise objective. The maximizer consists of the eigenvectors corresponding to the largest eigenvalues, obtained by eigenvalue decomposition. The theoretical justification for this initialization is discussed in section 5.
4.2 Iterative Update of Parameters
Our estimation procedure comprises the above estimation steps. The procedure is summarized as algorithm 1.
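Algorithm 1 itself is not reproduced here, so the following R sketch only mirrors the two-step structure just described, under two assumptions that are ours rather than the letter's: the initialization takes, for each mode, the leading eigenvectors of a weighted sum of group-wise mode-k covariance matrices, and the update step alternates mode-wise eigenvalue decompositions of the data projected along the other modes. The helpers mode_unfold, mode_prod, and mcca_sketch, as well as the weighting scheme, are hypothetical.

```r
# Mode-k unfolding of a single tensor sample stored as a base R array.
mode_unfold <- function(X, k) {
  d <- dim(X)
  matrix(aperm(X, c(k, setdiff(seq_along(d), k))), nrow = d[k])
}

# Mode-k product: multiply a tensor by a matrix along mode k.
mode_prod <- function(X, M, k) {
  d <- dim(X)
  perm <- c(k, setdiff(seq_along(d), k))
  Y <- M %*% matrix(aperm(X, perm), nrow = d[k])
  d_new <- d; d_new[k] <- nrow(M)
  aperm(array(Y, dim = d_new[perm]), order(perm))
}

# Hypothetical two-step sketch (not the letter's algorithm 1): initialize each
# projection matrix from the leading eigenvectors of a weighted sum of
# group-wise mode-k covariances, then alternate mode-wise eigen updates on
# partially projected data.
mcca_sketch <- function(groups, dims_out, weights = rep(1, length(groups)), n_iter = 10) {
  K <- length(dim(groups[[1]][[1]]))
  mode_cov <- function(samples, k, U = NULL) {
    mats <- lapply(samples, function(X) {
      if (!is.null(U)) for (j in setdiff(seq_len(K), k)) X <- mode_prod(X, t(U[[j]]), j)
      mode_unfold(X, k)
    })
    Reduce(`+`, lapply(mats, tcrossprod)) / length(mats)
  }
  top_eigvecs <- function(S, p) eigen(S, symmetric = TRUE)$vectors[, seq_len(p), drop = FALSE]
  # initialization: weighted sum of group-wise mode-k covariances
  U <- lapply(seq_len(K), function(k) {
    Sk <- Reduce(`+`, Map(function(g, w) w * mode_cov(g, k), groups, weights))
    top_eigvecs(Sk, dims_out[k])
  })
  # iterative mode-wise updates on data projected along the other modes
  for (iter in seq_len(n_iter)) {
    for (k in seq_len(K)) {
      Sk <- Reduce(`+`, Map(function(g, w) w * mode_cov(g, k, U), groups, weights))
      U[[k]] <- top_eigvecs(Sk, dims_out[k])
    }
  }
  U
}
```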
5 Theory
This section presents the theoretical and computational analyses of algorithm 1. The theoretical analyses consist of two steps. First, we prove that the initial values of the parameters obtained in section 4.1 are relatively close to the global optimal solution. If the initial values are close to the global maximum, we can obtain the global optimal solution even though the maximization problem is nonconvex. Second, we prove that the iterative updates of the parameters in section 4.2 monotonically increase the value of objective function 3.7 by solving the surrogate problem 4.3. From this monotonically increasing property, the estimated parameters always converge to a stationary point. The combination of these two results enables us to obtain the mode-wise global optimal solution. In the computational analysis, we calculate the computational cost of MCCA and compare it with that of conventional methods to investigate the computational efficiency of MCCA.
5.1 Analysis of Upper and Lower Bounds
This section provides upper and lower bounds for the global maximum of the maximization problem 3.7. From these bounds, we find that the initial values in section 4.1 are relatively close to the global optimal solution. Before providing the bounds, we define a contraction ratio.
Note that a contraction ratio satisfies and if and only if .
Using and the contraction ratio , we have the following theorem that reveals the upper and lower bounds of the global maximum in problem 3.7.
This theorem indicates that when . Thus, it is important to obtain an that is as close as possible to one. Since depends on and , depends on . From this dependency, if we could set the initial value of such that is as large as possible, then we could obtain an initial value of that attains a value near . The following theorem shows that we can compute the initial value of such that is maximized.
In fact, the contraction ratio is very close to one with the initial values given in theorem 2, even if the reduced dimension is small. This resembles the cumulative contribution ratio in PCA.
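As a point of reference, the cumulative contribution ratio in PCA can be computed in a few lines of R; the simulated data below are used only for illustration.

```r
# Cumulative contribution ratio of the leading principal components.
set.seed(1)
X <- matrix(rnorm(200 * 10), 200, 10)
eigvals <- eigen(cov(X), symmetric = TRUE)$values
cumsum(eigvals) / sum(eigvals)  # approaches one as more components are retained
```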
5.2 Convergence Analysis
We next verify that our proposed procedure for iteratively updating the parameters solves the maximization problem 3.7. In algorithm 1, the parameters are obtained by solving the surrogate maximization problem 4.3. Theorem 3 shows that algorithm 1 monotonically increases the value of the objective function in equation 3.7.
From theorem 1, we obtain initial values of the parameters that are near the global optimal solution. By combining theorems 1 and 3, the solution from algorithm 1 can be characterized by the following corollary.
Consider the maximization problem 3.7. Suppose that the initial values of the parameters are obtained by the initialization in section 4.1 and that the parameters are repeatedly updated by algorithm 1. Then the mode-wise global maximum of the maximization problem 3.7 is achieved when all the contraction ratios go to one.
Algorithm 1 does not guarantee the global solution because of the fundamental problem of nonconvexity, but it is sufficient for practical purposes. We investigate the issue of convergence to the global solution through numerical studies in section 6.3.
5.3 Computational Analysis
First, we analyze the computational cost. To simplify the analysis, we assume for . This implies that is the upper bound of for all . We then calculate the upper bound of the computational complexity.
The expensive computations of each iteration in algorithm 1 consist of three parts: forming the matrices to be decomposed, computing their eigenvalue decompositions, and updating the latent covariance matrices. The total computational complexity per iteration is the sum of the costs of these three steps.
Next, we analyze the memory requirement of algorithm 1. MCCA represents the original tensor data with fewer parameters by projecting the data onto a lower-dimensional space. This requires one projection matrix for each mode, so the number of parameters is the sum of the sizes of the mode-wise projection matrices. MPCA requires the same amount of memory as MCCA. Meanwhile, CCA and PCA need a single projection matrix acting on the vectorized data, whose size is the product of all the mode dimensions times the product of all the reduced dimensions.
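As a purely hypothetical illustration (these sizes are chosen for concreteness and are not taken from the experiments): for 46 × 56 images reduced to a 10 × 10 latent representation, the two mode-wise projection matrices of MCCA/MPCA and the single projection matrix of PCA/CCA require, respectively,

$$
46 \times 10 + 56 \times 10 = 1{,}020
\qquad\text{versus}\qquad
(46 \times 56) \times (10 \times 10) = 257{,}600
$$

parameters.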
To compare the computational costs clearly, the upper bounds of the computational complexity and the memory requirements are summarized in Table 1. The table shows that the computational complexity of MCCA is lower than that of the other algorithms and is not limited by the sample size. In contrast, the MPCA algorithm is affected by the sample size (Lu, Plataniotis, & Venetsanopoulos, 2008). Additionally, MCCA and MPCA require a large amount of memory when the number of modes in a data set is large, but their memory requirements are much smaller than those of PCA and CCA.
| Method | Computational Complexity | Memory Requirement |
|---|---|---|
| PCA | | |
| CCA | | |
| MPCA | | |
| MCCA | | |
6 Experiment
To demonstrate the efficacy of MCCA, we applied MCCA, PCA, CCA, and MPCA to image compression tasks.
6.1 Experimental Setting
For the experiments, we prepared the following three image data sets:
MNIST data set consists of handwritten digit images of size 28 × 28 pixels. The data set includes a training set of 60,000 images and a test set of 10,000 images. We used the first 10 training images of the data set for each group. The MNIST data set (Lecun, Bottou, Bengio, & Haffner, 1998) is available at http://yann.lecun.com/exdb/mnist/.
AT&T (ORL) face data set contains gray-scale facial images of 40 people, with 10 images of size 92 × 112 pixels for each person. We used images resized by a factor of 0.5 to improve the efficiency of the experiment. The AT&T face data set is available at https://git-disl.github.io/GTDLBench/datasets/att_face_dataset/. All credits for this data set go to AT&T Laboratories Cambridge.
Cropped AR database has color facial images of 100 people, cropped around the face. The data set contains 26 images per person, 12 of which show people wearing sunglasses or scarves. We used the cropped facial images of 50 males who were not wearing sunglasses or scarves. Due to memory limitations, we resized these images by a factor of 0.25. The AR database (Martinez & Benavente, 1998; Martinez & Kak, 2001) is available at http://www2.ece.ohio-state.edu/~aleix/ARdatabase.html.
The data set characteristics are summarized in Table 2.
| Data Set | Group Size | Sample Size (/Group) | Number of Dimensions | Number of Groups |
|---|---|---|---|---|
| MNIST | Small | 10 | 28 × 28 | 10 |
| AT&T (ORL) | Small | 10 | 46 × 56 | 10 |
| | Medium | 10 | 46 × 56 | 20 |
| | Large | 10 | 46 × 56 | 40 |
| Cropped AR | Small | 14 | | 10 |
| | Medium | 14 | | 25 |
| | Large | 14 | | 50 |
To compress these images, we performed dimensionality reduction with MCCA, PCA, CCA, and MPCA as follows. We vectorized the tensor data sets before performing PCA and CCA. In MCCA, the images were compressed and reconstructed according to the following steps (an R sketch of steps 4 and 5 is given after the list):
Prepare the multiple image data sets, one for each group.
Compute the covariance matrices of the data set for each group.
From these covariance matrices, compute the mode-wise linear transformation matrices that map each sample to the lower-dimensional latent space.
Map each sample to the latent space by multiplying it by the transformation matrices along each mode, where this operation is the k-mode product of a tensor (Kolda & Bader, 2009).
Reconstruct each sample from its latent representation by mapping it back with the same transformation matrices.
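The following R sketch illustrates steps 4 and 5 for a single image in the two-mode case. The matrices U1 and U2 stand in for the mode-wise transformation matrices of step 3 and are generated randomly here only for illustration; the image sizes are also hypothetical.

```r
# Compression and reconstruction of a single image (steps 4 and 5) in the
# two-mode case; U1 (I1 x P1) and U2 (I2 x P2) are stand-ins for the
# column-orthonormal transformation matrices obtained in step 3.
set.seed(1)
I1 <- 46; I2 <- 56; P1 <- 10; P2 <- 10
X  <- matrix(rnorm(I1 * I2), I1, I2)             # a hypothetical image
U1 <- qr.Q(qr(matrix(rnorm(I1 * P1), I1, P1)))   # stand-in for step 3
U2 <- qr.Q(qr(matrix(rnorm(I2 * P2), I2, P2)))

Z     <- t(U1) %*% X %*% U2   # step 4: map to the P1 x P2 latent space
X_hat <- U1 %*% Z %*% t(U2)   # step 5: reconstruct the sample

# Relative reconstruction error of this single sample.
sqrt(sum((X - X_hat)^2)) / sqrt(sum(X^2))
```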
Meanwhile, PCA and MPCA each require a single data set. Thus, we aggregated the group data sets into one data set and performed PCA and MPCA on the aggregated data.
6.2 Performance Assessment
For MCCA and MPCA, the reduced dimensions of the two modes were chosen to be the same number, with the number of modes fixed at two. All computations were performed in R (version 3.6) (R Core Team, 2019). In the initialization of MCCA, the quadratic programming problem was solved using the function ipop in the package kernlab. MPCA was implemented as the function mpca in the package rTensor. (The implementations of MCCA, PCA, and CCA are available at https://github.com/yoshikawa-kohei/MCCA.)
From Figure 1, we observe that the RER of MCCA is the smallest for any value of CR. This indicates that MCCA performs better than the other methods. In addition, CCA performs better than MPCA only for fairly small values of CR, even though it is a method for vector data, whereas MPCA performs better for larger values of CR. This suggests the limitations of applying a vector-data method such as CCA to tensor data.
Next, we consider the group size by comparing panels a, b, and c in Figure 1. The value of CR at the intersection of the CCA and MPCA curves increases as the group size increases. This indicates that MPCA has more trouble extracting an appropriate latent space as the group size increases. Because MPCA does not consider the group structure, it cannot properly estimate the covariance structure when the group size is large.
6.3 Behavior of Contraction Ratio
We examined the behavior of the contraction ratio. We performed MCCA on the AT&T (ORL) data set with the medium group size and computed the contraction ratios for various pairs of reduced dimensions.
6.4 Efficacy of Solving the Quadratic Programming Problem
We investigated the usefulness of determining the initial values by solving the quadratic programming problem 4.1. We applied MCCA to the AT&T (ORL) data set with the small, medium, and large numbers of groups. In addition, we used a smaller setting with three groups. For determining the initial values, we consider three methods: solving the quadratic programming problem 4.1 (MCCA:QP), setting all the initial values to one (MCCA:FIX), and setting the values by random sampling from the uniform distribution (MCCA:RANDOM). For each of these methods, we ran MCCA with the given reduced dimensions.
To evaluate the performance of these methods, we compared the attained values and the number of iterations in the estimation. The number of iterations in the estimation is the number of repetitions of lines 7 to 9 in algorithm 1. For MCCA:RANDOM, we performed 50 trials and calculated the average of each of these indices.
7 Conclusion
We have developed multilinear common component analysis (MCCA) by introducing a covariance structure based on the Kronecker product. To efficiently solve the nonconvex optimization problem for MCCA, we have proposed an iterative updating algorithm with superior theoretical convergence properties. Numerical experiments have shown the usefulness of MCCA.
Specifically, the proposed initialization was shown to be competitive among the initialization methods in terms of the number of iterations. As the number of groups increases, the overall number of samples increases; this may be why all the methods required almost the same number of iterations for the small, medium, and large numbers of groups. Note that in this study, we used the Kronecker product representation to estimate the covariance matrices of tensor data sets. Greenewald, Zhou, and Hero (2019) used a Kronecker sum representation for estimating the covariance matrix, and it would be interesting to extend MCCA to this and other covariance representations.
Appendix A: Proof of Lemma 1
We provide two basic lemmas about Kronecker products before we prove lemma 1.
These lemmas are known as the mixed-product property and the spectrum property, respectively. See Harville (1998) for detailed proofs.
Appendix B: Proof of Theorem 1
Theorem 1 can be easily shown from the following lemma.
Appendix C: Proof of Theorem 2
Appendix D: Proof of Theorem 3
Acknowledgments
We thank the reviewer for helpful comments and constructive suggestions. S.K. was supported by JSPS KAKENHI grants JP19K11854 and JP20H02227, and MEXT KAKENHI grants JP16H06429, JP16K21723, and JP16H06430.