Abstract

Neural encoding and decoding provide perspectives for understanding neural representations of sensory inputs. Recent functional magnetic resonance imaging (fMRI) studies have succeeded in building prediction models for encoding and decoding numerous stimuli by representing a complex stimulus as a combination of simple elements. While arbitrary visual images were reconstructed using a modular model that combined the outputs of decoder modules for multiscale local image bases (elements), the shapes of the image bases were heuristically determined. In this work, we propose a method to establish mappings between the stimulus and the brain by automatically extracting modules from measured data. We develop a model based on Bayesian canonical correlation analysis, in which each module is modeled by a latent variable that relates a set of pixels in a visual image to a set of voxels in an fMRI activity pattern. The estimated mapping from a latent variable to pixels can be regarded as an image basis. We show that the model estimates a modular representation with spatially localized multiscale image bases. Further, using the estimated mappings, we derive encoding and decoding models that produce accurate predictions for brain activity and stimulus images. Our approach thus provides a novel means of revealing neural representations of stimuli by automatically extracting modules, which can be used to generate effective prediction models for encoding and decoding.

1.  Introduction

Predictive models provide a basis for our understanding of the neural representation of sensory inputs. Previous studies have been conducted based on the concept of neural encoding and decoding (Dayan & Abbott, 2001; Pereira, Mitchell, & Botvinick, 2009; Naselaris, Kay, Nishimoto, & Gallant, 2011). An encoding model is formulated to predict brain activity from a stimulus, whereas a decoding model is formulated to predict a stimulus from brain activity. Recent functional magnetic resonance imaging (fMRI) studies using statistical learning-based encoding models have succeeded in predicting fMRI activity patterns from visual images (Kay, Naselaris, Prenger, & Gallant, 2008). The use of decoding models has also been successful in predicting visual features and images from fMRI activity patterns (Kamitani & Tong, 2005, 2006; Miyawaki et al., 2008).

A challenge in such data-driven modeling is how to generalize the model to an arbitrary stimulus or to the numerous possible stimuli that were not used to train the model. It is not feasible to measure brain activity for all possible stimuli. The utility of a model is limited if it provides a mapping only between a limited set of stimuli and their measured neural responses. A solution has been proposed using a modular representation of the mapping between a stimulus and brain activity. Mitchell et al. (2008) presented an encoding model that predicted brain activity for arbitrary nouns represented by modular semantic features defined by frequently associated verbs. Once the mapping between each semantic feature and brain activity was learned using a set of nouns, the model was generalized to arbitrary nouns by decomposing them into semantic features that were combined to predict the brain activity patterns. Miyawaki et al. (2008) demonstrated that arbitrary visual images can be reconstructed from human fMRI activity using a modular decoding model. Modular decoders were trained to predict the mean contrasts of predefined multiscale image bases (1 × 1, 1 × 2, 2 × 1, and 2 × 2 pixels, covering the entire image) using a set of random visual images and the neural responses recorded by fMRI. As each image basis has a small number of possible states, a small subset of all the possible random images was able to provide sufficient data to train the modular decoders. Once the decoders were trained, a visual image was reconstructed by linearly combining the image bases with the predicted contrasts. Note that in the neuroimaging literature, a module often refers to a large contiguous region with functional specificity. Here, however, modules can be much smaller units corresponding to elemental features of a stimulus or behavior.
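Once modular decoders have predicted a contrast for each basis, reconstruction is just a weighted sum of the bases. A minimal sketch (the 4 × 4 image, basis shapes, and contrast values are toy stand-ins, not the actual stimuli or decoder outputs of the study):

```python
import numpy as np

def reconstruct(bases, contrasts):
    """Linearly combine image bases weighted by their predicted contrasts."""
    return sum(c * b for c, b in zip(contrasts, bases))

# Three hypothetical local bases on a 4x4 pixel grid.
b1 = np.zeros((4, 4)); b1[0, 0] = 1.0       # 1x1 basis in the corner
b2 = np.zeros((4, 4)); b2[1, 1:3] = 1.0     # 1x2 horizontal basis
b3 = np.zeros((4, 4)); b3[2:4, 2:4] = 1.0   # 2x2 basis

# Contrasts that a set of modular decoders might have predicted.
image = reconstruct([b1, b2, b3], [0.8, 0.5, 0.3])
print(image)
```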

The performance of such a modular model is critically dependent on the selection of the elemental features used to compose a stimulus. In particular, although the multiscale image bases used in Miyawaki et al. (2008) outperformed alternative basis sets, they were heuristically selected, and thus not necessarily optimal. If image bases are automatically estimated from training data, they may reveal elemental features used by the brain, leading to better prediction performance.

In this letter, we propose a method to build encoding and decoding models while automatically determining image bases based on measured fMRI activity patterns related to visual images. We employ the framework of canonical correlation analysis (CCA) in which two multidimensional observations are related using a common coordinate system to maximize the correlations between transformed variables (Anderson, 2003). When applied to the mapping between a visual image and an fMRI activity pattern, CCA finds multiple correspondences from a weighted sum of pixels to a weighted sum of voxels, which constitute functional modules for visual image representation. The estimated pixel weights for each module can be regarded as an image basis, and the combination of the multiple modules allows us to represent a variety of visual images.

Because the early visual cortex is retinotopically organized, we can assume that a small set of pixels is represented by a small set of voxels. To facilitate the mapping between small sets of variables, we extend CCA to Bayesian CCA (Wang, 2007) with sparseness priors. Bayesian CCA treats multiple correspondences as latent variables, with two mapping matrices relating to two sets of observations. One matrix maps the latent variables to the visual image serving as a set of image bases, and the other matrix maps the latent variables to the fMRI voxels serving as a set of fMRI voxel weights. The matrices are assumed to be random variables with hyperparameters. We introduce a sparseness prior into each element of the matrices to ensure that only small subsets of voxels and pixels are related to nonzero matrix elements.

Once the mapping matrices are estimated, the Bayesian CCA model can be used to generate both encoding and decoding models. The encoding model can be derived by first inverting the mapping from the latent variables to the visual image and then combining the inverted mapping with that from the latent variables to the fMRI voxels. The decoding model can be derived in a reverse fashion. Using the encoding and decoding models derived by this framework, we can predict an fMRI activity pattern from a novel visual image and predict a visual image from a novel fMRI activity pattern.

We applied this framework to a data set consisting of random visual images and corresponding fMRI data used in our previous work (Miyawaki et al., 2008). The visual images were contrast-defined checkerboard patterns, and the fMRI signals were collected from area V1. Because the pixels in random images are not spatially correlated, the estimated image bases are expected to reflect intrinsic mappings between the visual field and brain activity rather than the correlations between pixels derived from a particular set of visual images. We show that Bayesian CCA estimates a modular representation consisting of spatially localized image bases and fMRI voxel weights. The estimated image bases had similar shapes to those used in our previous study (Miyawaki et al., 2008), although different shapes were also found. The fMRI voxel weights corresponding to the spatially localized image bases were spatially localized in the brain, consistent with the retinotopic organization of the visual cortex. The performance of the encoding and decoding models was evaluated by predicting fMRI activity patterns and visual images, respectively. We also demonstrate that the sparseness priors introduced into the Bayesian CCA model were important for estimating localized image bases and voxel weights. These results suggest that Bayesian CCA can effectively estimate the modular mapping between the stimulus and the brain, which can be used to generate prediction models for both encoding and decoding. A preliminary version of this study has been published in conference proceedings (Fujiwara, Miyawaki, & Kamitani, 2009), where we outlined the method for estimating image bases using Bayesian CCA.

2.  Methods

We constructed a model to relate an fMRI activity pattern and a visual image via latent variables (see Figure 1). A mapping from each latent variable to a set of visual image pixels can be regarded as an image basis. The latent variable also has links to a set of fMRI voxels, which serves as a voxel weight. Hence, the latent variable bundles a subset of fMRI voxels responding to a specific element of a visual image. We used the framework of canonical correlation analysis (CCA) to build the Bayesian CCA model for relating visual image pixels and fMRI voxels via latent variables. In the following, we introduce CCA and probabilistic CCA, a probabilistic reformulation of CCA (Bach & Jordan, 2005). Next, we derive our Bayesian CCA model, which allows sparse selection of visual image pixels and fMRI voxels for each link.

Figure 1:

Bayesian CCA models. (A) Illustration of the model framework. The visual image I (pixels) and an fMRI activity pattern r (voxels) are linked by latent variables z. The links from each latent variable to image pixels define an image basis (column vector of WI). The links from each latent variable to fMRI voxels represent a voxel weight (column vector of Wr). (B) Graphical representation of the model. Circles indicate model parameters to be estimated, and squares indicate observations. The matrices WI and Wr, the latent variables z, and the inverse variance parameters are simultaneously estimated using the variational Bayesian method. (C) The encoding model to predict a brain activity pattern r given an image I (thick line), derived from the parameters estimated in panel B. (D) The decoding model to predict an image I given a brain activity pattern r (thick line), derived from the parameters estimated in panel B.


2.1.  Canonical Correlation Analysis

CCA is a statistical method for estimating multiple correspondences in a common coordinate system from two sets of observation data. We first consider CCA for estimating image bases given a visual image I and an fMRI activity pattern r. Let I(t) be an N × 1 vector and r(t) be a K × 1 vector, where N is the number of image pixels, K is the number of fMRI voxels, and t is a sample index. Both data sets are assumed to be independent and identically distributed (i.i.d.) samples. CCA assumes linear combinations of image pixel values, u1(t) = a1⊤I(t), and fMRI voxel values, v1(t) = b1⊤r(t), and determines the combination coefficient vectors a1 and b1 such that the correlation between u1 and v1 is maximized. The variables u1 and v1 are called the first canonical variables, and the vectors a1 and b1 are the canonical coefficients. Next, the second canonical variables, u2(t) = a2⊤I(t) and v2(t) = b2⊤r(t), are determined by maximizing the correlation between u2 and v2 while keeping them orthogonal to the first canonical variables. This procedure is repeated until we reach some predefined number, M, which is conventionally set to the smaller dimension of the two sets of observations. In our case, M = N = 100, because the visual images are 10 × 10 pixels and the number of pixels is smaller than the number of fMRI voxels (N < K). The M sets of canonical variables are summarized as
\[ u(t) = A\, I(t), \tag{2.1} \]
\[ v(t) = B\, r(t), \tag{2.2} \]
where u(t) and v(t) are M × 1 vectors, A is an M × N matrix, and B is an M × K matrix. Matrices A and B are obtained by solving the eigenvalue problem of the covariance matrix of I and r (Anderson, 2003). The visual image can be reconstructed by
\[ I(t) = A^{-1} u(t), \tag{2.3} \]
where each column vector of the pseudo-inverse matrix A^{-1} serves as an image basis, since I(t) is represented by a linear combination of the column vectors of A^{-1}.
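As a concrete illustration of the procedure in equations 2.1 to 2.3, the sketch below estimates canonical coefficients from toy data via an SVD of the whitened cross-covariance, which is equivalent to the eigenvalue formulation; all data, dimensions, and variable names are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: T samples of N "pixels" and K "voxels" sharing latent sources.
T, N, K, M = 500, 5, 8, 5
z = rng.standard_normal((T, M))
I = z @ rng.standard_normal((M, N)) + 0.1 * rng.standard_normal((T, N))
r = z @ rng.standard_normal((M, K)) + 0.1 * rng.standard_normal((T, K))

def cca(X, Y, M):
    """Canonical coefficients via SVD of the whitened cross-covariance."""
    X = X - X.mean(0); Y = Y - Y.mean(0)
    Cxx, Cyy = X.T @ X / len(X), Y.T @ Y / len(Y)
    Cxy = X.T @ Y / len(X)
    def inv_sqrt(C):
        # Inverse matrix square root, used to whiten each data set.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    A = (Wx @ U[:, :M]).T      # maps pixels to canonical variables u
    B = (Wy @ Vt[:M].T).T      # maps voxels to canonical variables v
    return A, B, s[:M]         # s holds the canonical correlations

A, B, corrs = cca(I, r, M)
print(corrs)  # sorted in decreasing order; near 1 for strongly shared sources
```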

2.2.  Probabilistic CCA

Probabilistic CCA (Bach & Jordan, 2005) introduces the common latent variables z, instead of the canonical variables u and v, that relate a visual image I to an fMRI activity pattern r. The visual image and the fMRI activity pattern are both generated from the common latent variable z through an image basis set WI and a voxel weight set Wr:
\[ I(t) = W_I\, z(t) + \epsilon_I(t), \tag{2.4} \]
\[ r(t) = W_r\, z(t) + \epsilon_r(t), \tag{2.5} \]
where WI is an N × M matrix representing M image bases (N is the number of pixels), Wr is a K × M matrix representing M voxel weights (K is the number of voxels), and z(t) is an M × 1 vector. Although the visual image I is a variable controlled by the experimenter, it can also be considered as an observation, as can the fMRI activity pattern r. The N × 1 vector ε_I(t) and the K × 1 vector ε_r(t) represent the unknown observation noise of the visual image and the fMRI activity pattern.
In this framework, we can assume two likelihood functions: one for visual images generated from the latent variables and the other for fMRI activity patterns generated from the same latent variables. When the noise terms of the two observations, ε_I and ε_r, are assumed to follow multivariate gaussian distributions with zero mean and spherical covariance, the likelihood functions for the visual image I and the fMRI activity pattern r are derived from the generative models (see equations 2.4 and 2.5),
\[ P(I \mid W_I, z) = \prod_{t=1}^{T} \mathcal{N}\!\left(I(t) \mid W_I z(t),\, \beta_I^{-1} E_N\right), \tag{2.6} \]
\[ P(r \mid W_r, z) = \prod_{t=1}^{T} \mathcal{N}\!\left(r(t) \mid W_r z(t),\, \beta_r^{-1} E_K\right), \tag{2.7} \]
where β_I and β_r are the inverse variances of the observation noise ε_I and ε_r, E_N and E_K are identity matrices, and T is the number of observations. In probabilistic CCA, the orthogonality of the canonical variables u and v is represented by a prior distribution of the latent variable z. The prior distribution is assumed to be a multivariate gaussian distribution with an identity covariance matrix,
\[ P_0(z) = \prod_{t=1}^{T} \mathcal{N}\!\left(z(t) \mid 0,\, E_M\right), \tag{2.8} \]
which represents the independence and orthogonality of the latent variables. WI and Wr are estimated in a similar fashion to CCA; the details are given in Bach and Jordan (2005).
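The generative model of equations 2.4 to 2.7 is straightforward to simulate. A minimal sketch, in which the mapping matrices, dimensions, and noise precisions are arbitrary illustrative values rather than fitted quantities:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions: N pixels, K voxels, M latents, T samples.
N, K, M, T = 100, 300, 100, 50
W_I = rng.standard_normal((N, M))   # stand-in image basis set
W_r = rng.standard_normal((K, M))   # stand-in voxel weight set
beta_I, beta_r = 10.0, 5.0          # inverse noise variances (made up)

z = rng.standard_normal((M, T))                              # prior: z ~ N(0, E_M)
I = W_I @ z + rng.standard_normal((N, T)) / np.sqrt(beta_I)  # eq. 2.4
r = W_r @ z + rng.standard_normal((K, T)) / np.sqrt(beta_r)  # eq. 2.5
print(I.shape, r.shape)
```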

2.3.  Bayesian CCA

We derive a Bayesian CCA model (Wang, 2007) based on probabilistic CCA that selects the relevant image pixels and fMRI voxels automatically. In addition to the visual image I, fMRI activity pattern r, and latent variables z, Bayesian CCA treats the image basis set WI and the voxel weight set Wr as random variables. This assumption provides a Bayesian estimation of image bases and voxel weights, and each variable is evaluated using a posterior distribution. We also introduce hyperprior distributions for the inverse variance of each element of the image bases and the voxel weights. The relationship between these parameter variables in Bayesian CCA is shown in Figure 1B. The image bases, the voxel weights, and the inverse variances are estimated by employing the variational Bayesian method (Attias, 1999). After the parameters are determined, encoding and decoding models can be derived as predictive distributions.

The likelihood functions and the prior distribution of the latent variables can be formulated in the same way as for probabilistic CCA (see equations 2.6 to 2.8). In addition, we assume prior distributions of the image bases and the voxel weights,
\[ P_0(W_I \mid \alpha_I) = \prod_{n=1}^{N} \prod_{m=1}^{M} \mathcal{N}\!\left(W_{I,nm} \mid 0,\, \alpha_{I,nm}^{-1}\right), \tag{2.9} \]
\[ P_0(W_r \mid \alpha_r) = \prod_{k=1}^{K} \prod_{m=1}^{M} \mathcal{N}\!\left(W_{r,km} \mid 0,\, \alpha_{r,km}^{-1}\right), \tag{2.10} \]
where α_I,nm and α_r,km are the inverse variances of the elements of WI and Wr, respectively.
Moreover, we introduce hyperprior distributions for the inverse variances α_I,nm and α_r,km,
\[ P_0(\alpha_{I,nm}) = \Gamma\!\left(\alpha_{I,nm} \mid \bar\alpha_{I0},\, \gamma_{I0}\right), \tag{2.11} \]
\[ P_0(\alpha_{r,km}) = \Gamma\!\left(\alpha_{r,km} \mid \bar\alpha_{r0},\, \gamma_{r0}\right), \tag{2.12} \]
where Γ(α | ᾱ, γ) represents the gamma distribution with mean ᾱ and confidence parameter γ:
\[ \Gamma(\alpha \mid \bar\alpha, \gamma) = \frac{1}{\Gamma(\gamma)} \left(\frac{\gamma}{\bar\alpha}\right)^{\gamma} \alpha^{\gamma-1} \exp\!\left(-\frac{\gamma \alpha}{\bar\alpha}\right), \tag{2.13} \]
and Γ(γ) is the gamma function. In our analysis, the number of latent variables is set to M = N = 100 (as the visual images are 10 × 10 pixels), and the mean and the confidence parameter of the gamma distribution are set to 1 and 0, respectively.

This configuration of the prior and hyperprior distribution is known as automatic relevance determination (ARD) and has the effect that irrelevant parameters are automatically driven to zero (MacKay, 1994; Neal, 1996). In the current case, these priors and hyperpriors lead to a sparse selection of links from each latent variable to pixels and voxels. The use of this sparse prior may be validated by the fact that a spatially localized visual stimulus evokes activity in only a small number of voxels in the early visual area. The sparse parameter estimation avoids overfitting the training data by pruning irrelevant features, thereby helping to achieve high prediction performance (Tipping, 2001; Yamashita, Sato, Yoshioka, Tong, & Kamitani, 2008).
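The pruning effect of ARD can be demonstrated on a toy linear regression problem. The sketch below is not the paper's VB update: it uses a simple EM-style fixed-point rule for the element-wise precisions and assumes a known noise precision, but it shows the same mechanism by which irrelevant parameters are driven to zero:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse ground truth: only 3 of 20 features are relevant (toy setup).
T, D = 200, 20
X = rng.standard_normal((T, D))
w_true = np.zeros(D); w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.standard_normal(T)

beta = 100.0           # noise precision, assumed known here
alpha = np.ones(D)     # one ARD precision per weight
for _ in range(100):
    S = np.linalg.inv(beta * X.T @ X + np.diag(alpha))  # posterior covariance
    m = beta * S @ X.T @ y                              # posterior mean
    alpha = 1.0 / (m**2 + np.diag(S))                   # EM-style fixed point
    alpha = np.minimum(alpha, 1e8)                      # cap pruned precisions

# Precisions of irrelevant weights diverge; the relevant three stay small.
print(np.sum(alpha < 1e3))
```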

Prior distributions of the inverse variances of the observation noises are assumed to be uninformative:
\[ P_0(\beta_I) \propto \beta_I^{-1}, \tag{2.14} \]
\[ P_0(\beta_r) \propto \beta_r^{-1}. \tag{2.15} \]
Hereafter, the variance parameters are summarized as β = {β_I, β_r, α_I, α_r}.

2.4.  Parameter Estimation of Bayesian CCA Employing the Variational Bayesian Method

The image bases and voxel weights are estimated as a posterior distribution, given the likelihood functions (see equations 2.6 and 2.7), the prior distributions (see equations 2.8–2.10, 2.14, and 2.15), and the hyperprior distributions (see equations 2.11 and 2.12). This posterior distribution is obtained by marginalizing the joint posterior distribution with respect to the latent variables and the variance parameters:
\[ P(W_I, W_r \mid I, r) = \int P(W_I, W_r, z, \beta \mid I, r)\, dz\, d\beta, \tag{2.16} \]
where
\[ P(W_I, W_r, z, \beta \mid I, r) = \frac{P(I, r, W_I, W_r, z, \beta)}{P(I, r)}, \tag{2.17} \]
\[ P(I, r) = \int P(I, r, W_I, W_r, z, \beta)\, dW_I\, dW_r\, dz\, d\beta. \tag{2.18} \]
The joint distribution (the numerator on the right-hand side of equation 2.17) is constructed from equations 2.6 to 2.12, 2.14, and 2.15:
\[ P(I, r, W_I, W_r, z, \beta) = P(I \mid W_I, z)\, P(r \mid W_r, z)\, P_0(z)\, P_0(W_I \mid \alpha_I)\, P_0(W_r \mid \alpha_r)\, P_0(\alpha_I)\, P_0(\alpha_r)\, P_0(\beta_I)\, P_0(\beta_r). \tag{2.19} \]
Because the joint posterior distribution, equation 2.16, cannot be calculated analytically, we approximate it using a trial distribution based on the variational Bayesian (VB) method (Attias, 1999). In the VB method, a trial distribution is introduced to maximize the free energy, defined as
formula
2.20
We assume the trial distribution that can be factorized as
formula
2.21
Under this assumption, the free energy can be rewritten as
formula
2.22
The free energy is optimized by performing an iterative maximization of the distribution of the image bases and voxel weights , the distribution of the latent variable , and the distribution of the inverse variances . We repeat these calculations iteratively until the free energy converges. The VB method guarantees convergence to a local optimum (Bishop, 2006).
The update equations for estimating these distributions are obtained by taking the expectation of the joint distribution with respect to all parameters except the one being updated (Attias, 1999). First, the trial distribution of the image bases and voxel weights is calculated. When all variables except WI and Wr are fixed, the trial distribution that best approximates the posterior distribution is obtained by taking the expectation of the logarithm of the joint distribution with respect to the latent variable z and the variance parameters β:
\[ \log Q_W(W_I, W_r) = \left\langle \log P(I, r, W_I, W_r, z, \beta) \right\rangle_{Q_z Q_\beta} + \mathrm{const}. \tag{2.23} \]
The trial distribution can be factorized into Q_W(WI)Q_W(Wr) because of the conditional independence of WI and Wr when z is fixed. Hence, the trial distribution of the image bases is obtained by rearranging the terms of equation 2.23 with respect to WI,
\[ Q_W(W_I) = \prod_{n=1}^{N} \mathcal{N}\!\left(\mathbf{w}_{I,n} \mid \bar{\mathbf{w}}_{I,n},\, \Sigma_{I,n}^{-1}\right), \tag{2.24} \]
where \(\mathbf{w}_{I,n}\) is the nth row of WI,
\[ \Sigma_{I,n} = \bar\beta_I \sum_{t=1}^{T} \left\langle z(t)\, z(t)^\top \right\rangle + \mathrm{diag}\!\left(\bar\alpha_{I,n}\right), \tag{2.25} \]
\[ \bar{\mathbf{w}}_{I,n} = \bar\beta_I\, \Sigma_{I,n}^{-1} \sum_{t=1}^{T} I_n(t) \left\langle z(t) \right\rangle, \tag{2.26} \]
and N(· | μ, Σ) represents a gaussian distribution with mean μ and covariance Σ. The trial distribution Q_W(Wr) is obtained similarly, replacing I with r, n with k, and N with K in equations 2.24 to 2.26,
\[ Q_W(W_r) = \prod_{k=1}^{K} \mathcal{N}\!\left(\mathbf{w}_{r,k} \mid \bar{\mathbf{w}}_{r,k},\, \Sigma_{r,k}^{-1}\right), \tag{2.27} \]
where
\[ \Sigma_{r,k} = \bar\beta_r \sum_{t=1}^{T} \left\langle z(t)\, z(t)^\top \right\rangle + \mathrm{diag}\!\left(\bar\alpha_{r,k}\right), \tag{2.28} \]
\[ \bar{\mathbf{w}}_{r,k} = \bar\beta_r\, \Sigma_{r,k}^{-1} \sum_{t=1}^{T} r_k(t) \left\langle z(t) \right\rangle. \tag{2.29} \]
Second, the trial distribution of the latent variables is calculated. In a similar way to the estimation of Q_W, the trial distribution of the latent variables is obtained by taking the expectation of the logarithm of the joint distribution with respect to the image bases WI, the voxel weights Wr, and the variance parameters β:
\[ \log Q_z(z) = \left\langle \log P(I, r, W_I, W_r, z, \beta) \right\rangle_{Q_W Q_\beta} + \mathrm{const}. \tag{2.30} \]
The trial distribution is then obtained by rearranging the terms of equation 2.30 with respect to z,
\[ Q_z(z) = \prod_{t=1}^{T} \mathcal{N}\!\left(z(t) \mid \bar{z}(t),\, \Sigma_z^{-1}\right), \tag{2.31} \]
where
\[ \bar{z}(t) = \Sigma_z^{-1} \left( \bar\beta_I\, \bar{W}_I^\top I(t) + \bar\beta_r\, \bar{W}_r^\top r(t) \right), \tag{2.32} \]
\[ \Sigma_z = E + \bar\beta_I \left\langle W_I^\top W_I \right\rangle + \bar\beta_r \left\langle W_r^\top W_r \right\rangle. \tag{2.33} \]
In equation 2.33, E is an identity matrix, and ⟨W_I⊤W_I⟩ and ⟨W_r⊤W_r⟩ are defined as
\[ \left\langle W_I^\top W_I \right\rangle = \bar{W}_I^\top \bar{W}_I + \sum_{n=1}^{N} \Sigma_{I,n}^{-1}, \tag{2.34} \]
\[ \left\langle W_r^\top W_r \right\rangle = \bar{W}_r^\top \bar{W}_r + \sum_{k=1}^{K} \Sigma_{r,k}^{-1}. \tag{2.35} \]
The third step is the calculation of the trial distribution of the inverse variances Q_β(β). Each inverse variance is estimated as a gamma distribution; the derivation is given in the appendix.
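The update loop can be sketched numerically on a toy problem. The following implementation follows the spirit of equations 2.24 to 2.35 with three simplifications, all assumptions of this sketch rather than the paper's procedure: the noise precisions are held fixed instead of being estimated, the z update uses the posterior means of WI and Wr (dropping the covariance correction of equations 2.34 and 2.35), and the ARD precisions are updated by a simple EM-style rule instead of the full gamma posterior:

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny synthetic problem; dimensions are toy values, far smaller than the paper's.
N, K, M, T = 6, 9, 3, 400
Wi_true = rng.standard_normal((N, M))
Wr_true = rng.standard_normal((K, M))
z_true = rng.standard_normal((M, T))
I = Wi_true @ z_true + 0.05 * rng.standard_normal((N, T))
r = Wr_true @ z_true + 0.05 * rng.standard_normal((K, T))

beta_i = beta_r = 400.0            # noise precisions, fixed in this sketch
alpha_i = np.ones((N, M))          # ARD precisions on the elements of W_I
alpha_r = np.ones((K, M))
Wi = rng.standard_normal((N, M))   # random initialization of posterior means
Wr = rng.standard_normal((K, M))

for _ in range(50):
    # Q(z): gaussian posterior over latents (cf. eqs. 2.31-2.33, point-W version)
    Sz = np.linalg.inv(np.eye(M) + beta_i * Wi.T @ Wi + beta_r * Wr.T @ Wr)
    Z = Sz @ (beta_i * Wi.T @ I + beta_r * Wr.T @ r)   # posterior means of z(t)
    Ezz = T * Sz + Z @ Z.T                             # sum_t <z(t) z(t)^T>
    # Q(W): row-wise gaussian posteriors with ARD priors (cf. eqs. 2.24-2.29)
    for n in range(N):
        S = np.linalg.inv(beta_i * Ezz + np.diag(alpha_i[n]))
        Wi[n] = beta_i * S @ Z @ I[n]
        alpha_i[n] = 1.0 / (Wi[n] ** 2 + np.diag(S))   # EM-style ARD update
    for k in range(K):
        S = np.linalg.inv(beta_r * Ezz + np.diag(alpha_r[k]))
        Wr[k] = beta_r * S @ Z @ r[k]
        alpha_r[k] = 1.0 / (Wr[k] ** 2 + np.diag(S))

mse = np.mean((I - Wi @ Z) ** 2)   # reconstruction error approaches the noise floor
print(mse)
```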

2.5.  Predictive Distribution for the Encoding and Decoding Models

We can convert the Bayesian CCA model into either an encoding model (see Figure 1C) or a decoding model (see Figure 1D). The encoding model is obtained by calculating a predictive distribution P(rnew | Inew, I, r) for an fMRI activity pattern rnew given a visual image Inew. Note that Inew and rnew are taken from the data set reserved for testing the model, which is independent of the data set used for estimating the model parameters. The predictive distribution P(rnew | Inew, I, r) is constructed from the likelihood function of the fMRI activity pattern P(rnew | Wr, znew) (see equation 2.7), the distribution of the voxel weights Q_W(Wr) (see equations 2.27 to 2.29), and the posterior distribution of the latent variables P(znew | Inew, I, r):
\[ P(r_{\mathrm{new}} \mid I_{\mathrm{new}}, I, r) = \int P(r_{\mathrm{new}} \mid W_r, z_{\mathrm{new}})\, Q_W(W_r)\, P(z_{\mathrm{new}} \mid I_{\mathrm{new}}, I, r)\, dW_r\, dz_{\mathrm{new}}. \tag{2.36} \]
To calculate equation 2.36, we estimate the posterior distribution of the latent variables given a new visual image, P(znew | Inew, I, r). This posterior distribution is obtained from the likelihood function of the visual image P(I | WI, z) (see equation 2.6), the distribution of the image bases Q_W(WI) (see equations 2.24–2.26), and the prior distribution of the latent variables P0(z) (see equation 2.8), using Bayes' theorem:
\[ P(z_{\mathrm{new}} \mid I_{\mathrm{new}}, I, r) = \frac{P(I_{\mathrm{new}} \mid z_{\mathrm{new}}, I, r)\, P_0(z_{\mathrm{new}})}{P(I_{\mathrm{new}} \mid I, r)}, \tag{2.37} \]
\[ P(I_{\mathrm{new}} \mid z_{\mathrm{new}}, I, r) = \int P(I_{\mathrm{new}} \mid W_I, z_{\mathrm{new}})\, Q_W(W_I)\, dW_I, \tag{2.38} \]
\[ P(z_{\mathrm{new}} \mid I_{\mathrm{new}}, I, r) \approx \mathcal{N}\!\left(z_{\mathrm{new}} \mid \bar{z}_{\mathrm{new}},\, \Sigma_{\mathrm{new}}^{-1}\right), \tag{2.39} \]
where
\[ \bar{z}_{\mathrm{new}} = \bar\beta_I\, \Sigma_{\mathrm{new}}^{-1} \bar{W}_I^\top I_{\mathrm{new}}, \tag{2.40} \]
\[ \Sigma_{\mathrm{new}} = E + \bar\beta_I \left\langle W_I^\top W_I \right\rangle. \tag{2.41} \]
Because it is difficult to calculate the multiple integral in equation 2.36 analytically, we use a numerical sampling method. The independence between Q_W(Wr) and P(znew | Inew, I, r) holds as long as the fMRI activity pattern rnew is not observed. Thus, we generate R samples {W_r^(i)} from the distribution Q_W(Wr) and R samples {z_new^(i)} from P(znew | Inew, I, r) independently. Given these, we then generate R samples {r_new^(i)} from the conditional distribution P(rnew | W_r^(i), z_new^(i)) based on equation 2.7. Finally, we obtain a prediction of the fMRI activity pattern by calculating the sample average,
\[ \hat{r}_{\mathrm{new}} = \frac{1}{R} \sum_{i=1}^{R} r_{\mathrm{new}}^{(i)}. \tag{2.42} \]
In this study, we set R = 5000. For the decoding model, the predictive distribution for a new visual image Inew given a new fMRI activity pattern rnew is constructed in a similar way, exchanging the roles of I and r.
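The sampling scheme of equations 2.36 and 2.42 can be sketched as follows. The posterior summaries here (Wr_mean, Wr_std, z_mean, z_cov, beta_r) are made-up stand-ins for the quantities a fitted model would provide:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy posterior summaries standing in for a fitted model (all values illustrative).
K, M, R = 4, 2, 5000
Wr_mean = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, -0.5]])
Wr_std = 0.05                      # element-wise posterior std of W_r
z_mean = np.array([2.0, -1.0])     # posterior mean of z_new given I_new
z_cov = 0.01 * np.eye(M)           # posterior covariance of z_new
beta_r = 100.0                     # noise precision of r

# Draw R independent samples of W_r and z_new, then sample r_new as in eq. 2.7.
W_samp = Wr_mean + Wr_std * rng.standard_normal((R, K, M))
z_samp = rng.multivariate_normal(z_mean, z_cov, size=R)
r_samp = (np.einsum('rkm,rm->rk', W_samp, z_samp)
          + rng.standard_normal((R, K)) / np.sqrt(beta_r))

r_pred = r_samp.mean(axis=0)       # eq. 2.42: sample average
print(r_pred)                      # close to Wr_mean @ z_mean
```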

2.6.  fMRI Data

We used the data set from Miyawaki et al. (2008), which contains fMRI signals of two subjects observing visual images consisting of contrast-defined checkerboard patches (the data set is available at our web site). Each patch was either a flickering checkerboard (spatial frequency, 1.74 cycles/deg; temporal frequency, 6 Hz) or a homogeneous gray area. The data set consisted of two independent sessions. One was a random image session, in which a spatially random pattern was presented for 6 s, followed by a 6 s rest period. A total of 440 different random patterns were presented to each subject. The other was a figure image session, in which a letter of the alphabet or a simple geometric shape was presented for 12 s, followed by a 12 s rest period. Five letters of the alphabet and five geometric shapes were presented six or eight times per subject. To estimate the model parameters, we used a set of single fMRI volumes acquired every 2 s from V1 during the stimulus presentation period (stimulus labels were shifted by 4 s to compensate for the hemodynamic delay). The prediction performance of the model was tested using block-averaged data (average voxel intensities of 3 volumes [6 s] or 6 volumes [12 s]).

2.7.  Evaluation of Model Performance

We conducted 10-fold cross-validation analysis to evaluate the prediction performance of the encoding and decoding models derived from our Bayesian CCA approach. We used the data from the random image session to avoid possible biases due to specific shapes contained in the presented visual images. The data were divided into 10 groups, each consisting of 44 different random images and the corresponding fMRI data. The model parameters, including image bases and fMRI voxel weights, were estimated with 9 groups, and the predictions of the encoding and decoding models were tested on the one remaining group. The prediction models trained on random images were also tested with the data from the figure image session. The results were used only for the illustration of reconstruction quality.

To evaluate the encoding model, we performed an image identification analysis (Kay et al., 2008) in which the presented image was identified from among a candidate image set based on the fMRI activity patterns predicted by the encoding model. For each candidate image, the encoding model predicted an fMRI activity pattern, and the correlation coefficient between the predicted and measured fMRI activity patterns was calculated. The candidate image whose predicted fMRI activity pattern correlated best with the measured pattern was selected as the identified image. The candidate set consisted of the presented image (true image) and a variable number of randomly generated images that were not used in the experiment. The size of the randomly generated image set was increased from 10 to 1000 in steps of one image. We repeated the identification 200 times for each image and each set size to obtain the percentage of correct identifications. We then calculated the mean performance across images and the confidence interval for each set size.
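The identification procedure reduces to a correlation-based nearest-neighbor search over predicted activity patterns. A toy sketch, with a hypothetical linear map standing in for the fitted encoding model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical encoding model: a fixed random linear map from image to pattern.
N, K = 25, 60
W = rng.standard_normal((K, N))
encode = lambda img: W @ img

true_img = rng.standard_normal(N)
measured = encode(true_img) + 0.5 * rng.standard_normal(K)  # noisy measurement

def identify(measured, candidates):
    """Pick the candidate whose predicted pattern best correlates with the data."""
    corrs = [np.corrcoef(encode(c), measured)[0, 1] for c in candidates]
    return int(np.argmax(corrs))

# Candidate set: the true image plus 99 random distractors.
candidates = [true_img] + [rng.standard_normal(N) for _ in range(99)]
print(identify(measured, candidates))  # index 0 means correct identification
```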

We compared the performance of the encoding model derived from Bayesian CCA with that of a multiscale fixed image basis model and a voxel receptive field (RF) model. In the multiscale fixed image basis model, the image basis set WI of the Bayesian CCA model was replaced with the fixed image bases (361 multiscale image bases) used in Miyawaki et al. (2008), and the voxel weight set Wr was estimated from data. The voxel RF model predicted the intensity of each voxel by calculating a weighted average of the image contrasts filtered by the fixed image bases (mean contrasts within each image basis),
\[ r_k(t) = \mathbf{w}_k^\top x(t) + b_k, \tag{2.43} \]
where r_k(t) is the kth voxel intensity at time t, x(t) is the vector of image contrasts filtered by the image bases (a 361 × 1 vector of mean contrasts within each basis), w_k is a weight vector (361 × 1), and b_k is a bias term. The weight vector and bias term are estimated using the regularized least squares method that minimizes the error function,
\[ E(\mathbf{w}_k, b_k) = \sum_{t=1}^{T} \left( r_k(t) - \mathbf{w}_k^\top x(t) - b_k \right)^2 + \lambda \left\| \mathbf{w}_k \right\|^2, \tag{2.44} \]
where λ is a regularization coefficient, which was estimated by a Bayesian treatment (Bishop, 2006).
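For one voxel, the regularized least squares fit of equation 2.44 has a closed form. A sketch with synthetic features; here λ is simply fixed rather than estimated by the Bayesian treatment mentioned above, and the bias is absorbed as an unpenalized extra column:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic filtered-contrast features x(t) and one voxel's responses.
T, D = 300, 10
X = rng.standard_normal((T, D))
w_true = rng.standard_normal(D)
y = X @ w_true + 1.5 + 0.1 * rng.standard_normal(T)   # true bias b_k = 1.5

lam = 1.0                                 # fixed regularization coefficient
Xb = np.hstack([X, np.ones((T, 1))])      # absorb the bias term b_k
P = lam * np.eye(D + 1)
P[-1, -1] = 0.0                           # leave the bias column unpenalized
w_hat = np.linalg.solve(Xb.T @ Xb + P, Xb.T @ y)
print(w_hat[-1])                          # estimated bias, close to 1.5
```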

To evaluate the performance of the decoding model, we reconstructed visual images from the fMRI activity patterns (Miyawaki et al., 2008). Reconstruction performance was quantified by the mean squared error in pixel values between the presented and reconstructed images. The performance was compared with that of the previous model using the multiscale fixed image bases (Miyawaki et al., 2008).

3.  Results

3.1.  Estimated Image Bases and fMRI Voxel Weights

Figure 2A shows representative image bases estimated by Bayesian CCA. The sign of the pixel values (darker or brighter) is not important here, because the same CCA model holds when the pixel and voxel weights both have flipped signs: a dark image basis should be interpreted as an image basis whose associated voxel weights have flipped signs. The shapes were similar to those used in the previous study (Miyawaki et al., 2008) (1 × 1, 1 × 2, 2 × 1, and 2 × 2 bases, shown in the first and second rows of Figure 2A). However, we also found image bases with other shapes (e.g., an L-shape; the third row of Figure 2A) that were not assumed in the previous study.

Figure 2:

Estimated image bases and fMRI voxel weights. (A) Representative estimated bases (left, sorted by number of pixels) and their frequency as a function of eccentricity (right, two subjects pooled). Pixel values of image bases are represented by a gray scale. (B) Representative estimated fMRI voxel weights corresponding to image bases (shown in the top left inset). The absolute values of voxel weights are mapped on the cortical surface of area V1. The left and right maps correspond to the right and left image bases, respectively. Eccentricity is indicated by white lines, which are identified by the conventional retinotopic mapping procedure. (C) Summary of the voxel weight distribution. Image bases in the right visual field are sorted by the eccentricity and the polar angle of the central position (horizontal axis, 0.5 degree bins for eccentricity and 10 degree bins for polar angle). Contralateral voxels (left hemisphere) were sorted by the corresponding eccentricity and polar angle identified by the conventional retinotopic mapping procedure (vertical axis, 0.5 degree bins for eccentricity and 10 degree bins for polar angle). The magnitude of voxel weights was averaged in each image basis location and cortical location for 10 models generated by cross-validation analysis (two subjects pooled). Similar results were observed for image bases in the left visual field and the contralateral voxels (right hemisphere).


To evaluate the relationship between the size of the image bases and the corresponding locations in the visual field, we calculated the distribution of the image bases over eccentricity for different sizes (see Figure 2A, right). The image bases of the smallest size (1 × 1) were the most frequent, and most of them were found within 3 degrees of eccentricity. Larger image bases were found more frequently in the peripheral visual field than in the foveal visual field. These results are consistent with the larger receptive fields of visual cortical neurons or voxels for peripheral visual fields (Hubel & Wiesel, 1974; Smith, Singh, Williams, & Greenlee, 2001; Dumoulin & Wandell, 2008).

In addition to image bases, Bayesian CCA simultaneously estimates fMRI voxel weights corresponding to each estimated image basis. Figure 2B shows two representative voxel weights corresponding to image bases located to the left and right of the fixation point (the corresponding image bases are shown in the top left of each cortical surface map). Voxel weights were mapped to the cortical surface using the retinotopic map for eccentricity (white lines) obtained in a separate retinotopy experiment using a conventional procedure (Sereno et al., 1995; Engel, Glover, & Wandell, 1997). Large voxel weights were localized in the cortical region corresponding to small eccentricity contralateral to the location of the image basis, consistent with the retinotopic location of the image basis.

The distribution of absolute weight values on the cortical surface for image bases is summarized in Figure 2C. Image bases were sorted by the eccentricity and the polar angle of the central position in the visual field, and voxels were sorted by the corresponding eccentricity and polar angle identified in the retinotopic mapping experiment. The absolute voxel weights were normalized by the maximum value and then averaged for each eccentricity or polar angle bin of the image basis. Large values were distributed along the diagonal, indicating that the cortical location of the voxel weights was congruent to the retinotopic map of the image bases.

3.2.  Evaluation of the Encoding Model by Identification Analysis

To evaluate the performance of the encoding model derived from Bayesian CCA, we used an image identification analysis (Kay et al., 2008), in which the encoding model produced predicted brain activity patterns for a set of candidate visual images, and these predictions were compared with the measured fMRI activity pattern. The visual image whose predicted fMRI activity pattern was best correlated with the measured fMRI activity pattern was selected. If the selected image was the presented image, the identification was counted as correct. We repeated this procedure for all presented images while varying the candidate set size. The performance of the Bayesian CCA encoding model was compared with that of the fixed-basis model and the voxel RF model. In contrast to the Bayesian CCA model, the fixed-basis model lacks the ability to estimate image bases from data. The voxel RF model is trained to treat each voxel independently and is thus unable to take voxel correlations into account.
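The identification procedure can be sketched as follows. This is an illustrative reimplementation rather than the authors' code: the encoding model is abstracted as a function `encode` mapping an image to a predicted voxel pattern, and all names here are hypothetical.

```python
import numpy as np

def identify(measured, candidates, encode, rng, set_size=100):
    """Correlation-based image identification (in the style of Kay et al., 2008).

    measured   : (n_voxels,) measured fMRI pattern for the presented image
    candidates : (n_images, n_pixels) pool whose first row is the presented image
    encode     : callable mapping an image to a predicted (n_voxels,) pattern
    """
    # Draw a random candidate set that always includes the presented image.
    idx = rng.choice(np.arange(1, len(candidates)), size=set_size - 1, replace=False)
    subset = np.vstack([candidates[:1], candidates[idx]])
    # Correlate the measured pattern with each predicted pattern.
    preds = np.array([encode(img) for img in subset])
    corrs = [np.corrcoef(measured, p)[0, 1] for p in preds]
    # Identification is correct if the presented image (index 0) wins.
    return int(np.argmax(corrs) == 0)
```

Repeating this over all presented images and many random candidate sets, for each set size, yields the percentage-correct curve of Figure 3.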

The identification performance of the Bayesian CCA model remained high, exceeding 30% even for a set size of 1000 candidates (see Figure 3). That is, using the encoding model, we were able to correctly identify the presented visual image from among 1000 candidates with a probability above 30%, whereas the chance level is 0.1% (= 1/1000). Note that when we trained the Bayesian CCA model with shuffled data (the correspondence between visual images and fMRI volumes was shuffled), the performance dropped to the chance level. The fixed-basis model showed performance comparable to the Bayesian CCA model for small set sizes but significantly lower performance for larger set sizes (more than 200). The voxel RF model showed much lower performance than the other models for all set sizes. We further extrapolated the identification performance by fitting a sigmoid function on a logarithmic scale of set size. The extrapolation suggests that the Bayesian CCA model could achieve an accuracy of 10% with a set size of 10^4.3, exceeding the performance of the fixed-basis and voxel RF models (10% accuracy at set sizes of 10^4 and 10^3.2, respectively). These results indicate that the data-driven estimation of both image bases and voxel weights contributed to the prediction performance of the encoding model.
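The extrapolation step can be reproduced in outline as below. The exact parametric form used in the study is not specified beyond a sigmoid in log set size, so the logistic form and the performance values here are illustrative assumptions, not the reported data.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(log_n, a, b):
    # Percent correct as a logistic function of log10(set size),
    # saturating at 100% for trivially small candidate sets.
    return 100.0 / (1.0 + np.exp(a * (log_n - b)))

# Hypothetical performance points: (set size, percent correct).
set_sizes = np.array([10, 20, 50, 100, 200, 500, 1000])
accuracy = np.array([95, 90, 80, 70, 58, 42, 32])

params, _ = curve_fit(sigmoid, np.log10(set_sizes), accuracy, p0=[1.0, 3.0])
a, b = params

# Solve sigmoid(x) = 10 analytically: exp(a (x - b)) = 9.
log_n_at_10pct = b + np.log(9.0) / a
```

The fitted `log_n_at_10pct` is the log10 set size at which accuracy drops to 10%, the quantity compared across models in the text.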

Figure 3:

Image identification using encoding models. The mean percentage of correct identification for the 440 presented images is plotted as a function of the set size of candidate images for the encoding model derived from Bayesian CCA (red), the fixed-basis model (blue), and the voxel RF (green) model (shading, 95% confidence interval; two subjects pooled). The percentage of correct identifications for each presented image was calculated by repeating the identification analysis 200 times with randomly generated candidates. The dashed line indicates the chance level.

3.3.  Evaluation of the Decoding Model by Visual Image Reconstruction

We also derived a decoding model from Bayesian CCA and used it to reconstruct visual images from fMRI activity patterns. Reconstructed images from the figure image session, including letters of the alphabet and geometric shapes, are shown in Figure 4A. Images reconstructed by the Bayesian CCA decoding model captured the essential features of the presented images (see Figure 4A, second row). In particular, figures consisting of thin lines, such as small frames and letters of the alphabet, were finely reconstructed. However, the reconstructions were noisier than those of the previous model using multiscale fixed image bases (Miyawaki et al., 2008; see Figure 4A, third row), presumably because few image bases were estimated in the peripheral regions by Bayesian CCA (see Figure 2A, right). To evaluate the reconstruction performance quantitatively, we calculated the mean square error in pixel contrast between the presented and reconstructed images at each eccentricity, using the data obtained in the random image session (see Figure 4B). Whereas the errors of the Bayesian CCA and fixed-basis models differed little in foveal pixels, the fixed-basis model outperformed the Bayesian CCA model in parafoveal pixels. Both models made large errors in peripheral pixels, although the difference between the two was small.
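The per-eccentricity error measure can be sketched as follows. The binning scheme, image dimensions, and degrees-per-pixel scaling are assumptions for illustration, not the study's exact parameters.

```python
import numpy as np

def error_by_eccentricity(presented, reconstructed, deg_per_pixel=1.0, n_bins=5):
    """Mean square error in pixel contrast, grouped by eccentricity.

    presented, reconstructed : (n_images, h, w) arrays of pixel contrasts.
    Returns one mean squared error per eccentricity bin.
    """
    n, h, w = presented.shape
    # Eccentricity of each pixel: distance from the image center (fixation).
    ys, xs = np.mgrid[:h, :w]
    ecc = np.hypot(ys - (h - 1) / 2, xs - (w - 1) / 2) * deg_per_pixel
    sq_err = (presented - reconstructed) ** 2
    # Assign each pixel to an eccentricity bin and average the squared error
    # over images and over pixels within each bin.
    edges = np.linspace(0.0, ecc.max() + 1e-9, n_bins + 1)
    which = np.digitize(ecc.ravel(), edges) - 1
    return np.array([sq_err.reshape(n, -1)[:, which == b].mean()
                     for b in range(n_bins)])
```

Plotting the returned vector against the bin centers gives a curve of the kind shown in Figure 4B.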

Figure 4:

Visual image reconstruction. (A) Presented images (first row, letters of the alphabet and geometric shapes) and the reconstructed images obtained from the Bayesian CCA model (second row) and the fixed-basis model (third row). (B) Reconstruction errors were evaluated by calculating the mean square error between presented and reconstructed images at each eccentricity (error bar, 95% confidence interval; two subjects pooled).

3.4.  Effects of Sparseness Priors

Bayesian CCA extracted localized image bases and voxel weights using sparseness priors. To examine the advantages of using sparseness priors, we compared the image bases estimated by Bayesian CCA with those estimated under the following three conditions: CCA with sparseness priors for image bases alone, CCA with sparseness priors for voxel weights alone, and CCA without any sparseness priors. Figure 5A shows representative image bases estimated under these conditions. Overall, most of the image bases estimated by CCA lacking either or both of the sparseness priors were less spatially localized. A quantitative evaluation of the sparseness using kurtosis showed that the image bases estimated by Bayesian CCA were significantly sparser than those under the three conditions (see Figure 5B; Kruskal-Wallis test, p<0.01, Bonferroni corrected for multiple comparisons). These results indicate that both sparseness priors contribute to the estimation of localized image bases.
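The kurtosis-based sparseness index can be computed directly. Whether the study used excess (Fisher) or Pearson kurtosis is not stated, so the choice below is an assumption; either way, localized bases score much higher than smooth ones.

```python
import numpy as np
from scipy.stats import kurtosis

def sparseness_index(image_bases):
    """Kurtosis of each basis's pixel-value distribution as a sparseness index.

    image_bases : (n_bases, n_pixels) array. A basis whose energy is
    concentrated in a few pixels has a heavy-tailed value distribution
    and hence a high kurtosis; a smooth, spread-out basis scores low.
    """
    return kurtosis(image_bases, axis=1, fisher=True)

# A localized basis (a single active pixel) vs. a smooth, nonlocalized one.
localized = np.zeros(100)
localized[3] = 1.0
smooth = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))
```

Comparing `sparseness_index` across estimation conditions is the kind of evaluation summarized in Figure 5B.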

Figure 5:

Effects of sparseness priors. (A) Examples of image bases estimated by Bayesian CCA, CCA with a sparseness prior for voxel weights, CCA with a sparseness prior for image bases, and ordinary CCA. (B) Sparseness of the estimated bases. Kurtosis of each image basis was calculated as the sparseness index (error bar, 95% confidence interval; two subjects pooled). (C) Reconstruction errors between presented and reconstructed images (error bar, 95% confidence interval; two subjects pooled).

We also examined the reconstruction performance using the nonlocalized image bases and compared it with that using the localized image bases estimated by Bayesian CCA. All three CCA models lacking sparseness priors showed significantly poorer reconstruction performance than the Bayesian CCA model (see Figure 5C; ANOVA, p<0.01, Bonferroni corrected for multiple comparisons), with the CCA model without any sparseness priors performing worst. Thus, the sparseness priors for both image bases and voxel weights also contribute to accurate reconstruction.

4.  Discussion

We have proposed a new method for estimating encoding and decoding models within a unified framework based on Bayesian canonical correlation analysis (CCA). Bayesian CCA finds multiple correspondences between visual image pixels and fMRI voxels via latent variables. Using this model, we were able to estimate spatially localized image bases and fMRI voxel weights. From these estimates, we derived an encoding model that predicted brain activity patterns from stimulus images and outperformed the fixed-basis and voxel RF models. We also derived a decoding model that succeeded in reconstructing visual images from brain activity patterns, although its performance was lower than that of the fixed-basis model. Sparseness priors on the image bases and voxel weights facilitated the selection of small, spatially localized sets of pixels as image bases and contributed to the reconstruction accuracy.

Bayesian CCA estimated spatially localized image bases around the foveal region. However, a small number of image bases were estimated outside the foveal region (see Figure 2A). This may be due to the small cortical magnification factor of the visual cortex for the peripheral visual field (Tootell, Silverman, Switkes, & De Valois, 1982; Engel et al., 1997). Since our model uses sparseness priors under the assumption that a small number of voxels are associated with a small number of pixels, the model may fail to relate a small number of peripheral voxels to a large number of pixels in the peripheral visual field. Elaborate adjustment of the degree of sparseness based on the eccentricity and the cortical magnification factor may help to improve basis estimation outside the foveal region.

The encoding model derived from Bayesian CCA demonstrated higher identification performance than the fixed-basis and voxel RF models. The superior performance of the Bayesian CCA model over the fixed-basis model indicates that the data-driven estimation of image bases does indeed contribute to the prediction of brain activity. The much poorer performance of the voxel RF model may be due to its inability to exploit correlation structures among voxels. fMRI voxels are generally highly correlated, and the correlations can carry relevant information about stimuli or tasks even in the absence of information in individual voxels (Yamashita et al., 2008). Both the Bayesian CCA and fixed-basis models exploit voxel correlations to estimate their weight parameters (voxel correlations are taken into account in the iterative calculation of the trial distributions; see equations 2.27 to 2.33), thus producing predictions that reflect voxel correlations. In contrast, the voxel RF model makes predictions in a voxel-by-voxel manner, ignoring any correlation among voxels. This difference may account for its poor performance.

Whereas the encoding model derived from Bayesian CCA was more accurate than the fixed-basis encoding model, the decoding model did not outperform the previously proposed fixed-basis decoding model (Miyawaki et al., 2008; see also Figure 4). As noted above, one reason for the inferior performance with Bayesian CCA may be that fewer image bases were estimated outside the foveal region (see Figure 2A), while the fixed-basis decoding model enforced image bases that covered the whole region. The lack of image bases outside the foveal region is likely to have a more profound impact on the decoding performance than on the encoding performance. The calculation of reconstruction error in the decoding model treated all pixels equally, whereas the evaluation of the encoding model using predicted fMRI activity patterns should depend more heavily on foveal pixels, which have a larger cortical representation than peripheral pixels due to the cortical magnification factor. These issues may underlie the discrepancy in performance between the encoding and decoding models.

It is known that fMRI voxels for foveal representation often suffer from signal dropout because of the proximity to the superior sagittal sinus (Dagli, Ingeholm, & Haxby, 1999). We indeed found lower signal-to-noise ratios (SNRs) in foveal voxels using data from the retinotopy experiment (see Figure 6; SNR was calculated as the ratio of the power at the frequency of the periodically moving stimuli to the power at other frequencies). The high level of accuracy for foveal pixels, despite the low SNR in retinotopic voxels, suggests that the pattern of many voxels, including those extending beyond the conventional retinotopy, contributed to the prediction of this region.
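The SNR computation described in parentheses can be sketched as follows. The handling of the DC component and of harmonics is an assumption, since only the basic power ratio is specified in the text.

```python
import numpy as np

def voxel_snr(timecourse, stim_freq_idx):
    """SNR of a voxel time course for a periodic stimulus.

    Ratio of the power at the stimulus frequency bin to the mean power
    at all other (non-DC) frequency bins of the one-sided spectrum.
    """
    # Remove the mean so the DC bin does not contaminate the spectrum.
    power = np.abs(np.fft.rfft(timecourse - timecourse.mean())) ** 2
    signal = power[stim_freq_idx]
    # Drop the DC bin and the signal bin from the noise estimate.
    noise_bins = np.delete(power[1:], stim_freq_idx - 1)
    return signal / noise_bins.mean()
```

Grouping voxels by retinotopic eccentricity and averaging `voxel_snr` within each group gives the inverse-SNR profile plotted in Figure 6.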

Figure 6:

Signal-to-noise ratio (SNR) and reconstruction error. Using the data from the retinotopy experiment, we assigned a retinotopic location to each voxel and calculated the SNR of individual voxels. We then grouped the voxels according to the retinotopic eccentricity, and the inverse SNR was plotted with the reconstruction error of the Bayesian CCA model for the pixels at each eccentricity (see Figure 4B) (error bar, 95% confidence interval; two subjects pooled).

It should also be noted that whereas Bayesian CCA could exploit the correlation structure of input variables, stimulus images used for model training consisted of spatially uncorrelated random patterns. Therefore, the encoding and decoding models did not take full advantage of the spatial correlation structure inherent in natural visual images (Olshausen & Field, 1996; Bell & Sejnowski, 1997). Using images with natural correlations, Bayesian CCA may be able to extract image bases and voxel weights reflecting natural image statistics. It remains to be seen in future work how they differ from those derived from random images.

A similar method, known as sparse CCA, has been proposed in the field of biostatistics and applied to find relationships between gene expression and DNA copy number (Witten, Tibshirani, & Hastie, 2009; Parkhomenko, Tritchler, & Beyene, 2009). Instead of the sparseness priors assumed in Bayesian CCA, sparse CCA imposes penalty terms (e.g., an L1 constraint) on the weight vectors. Depending on the penalty term, sparse CCA is expected to produce results similar to those of Bayesian CCA. Thus, sparse CCA may also be used to estimate spatially localized image bases and voxel weights and to derive encoding and decoding models.
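As a rough illustration of the relationship between the two approaches, the core of sparse CCA in the style of Witten et al. (2009) can be sketched as alternating soft-thresholded power iterations on the cross-covariance matrix. Note that their method tunes the threshold by binary search to satisfy an explicit L1 constraint; that tuning is simplified here to fixed penalties, so this is a sketch of the idea rather than their exact algorithm.

```python
import numpy as np

def soft_threshold(a, lam):
    # Elementwise soft-thresholding: the proximal operator of the L1 penalty.
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

def sparse_cca(X, Y, lam_u, lam_v, n_iter=50):
    """First pair of sparse canonical vectors for two data views.

    Alternating soft-thresholded power iterations on the cross-covariance
    matrix, warm-started at its top singular vector. lam_u and lam_v are
    fixed L1 penalties (an assumption replacing the constraint-based
    tuning of Witten et al., 2009).
    """
    C = X.T @ Y  # cross-covariance between the two views
    v = np.linalg.svd(C, full_matrices=False)[2][0]  # warm start
    u = np.zeros(X.shape[1])
    for _ in range(n_iter):
        u = soft_threshold(C @ v, lam_u)
        norm = np.linalg.norm(u)
        if norm == 0:
            break
        u /= norm
        v = soft_threshold(C.T @ u, lam_v)
        norm = np.linalg.norm(v)
        if norm == 0:
            break
        v /= norm
    return u, v
```

Applied to pixel and voxel data, the nonzero entries of `u` and `v` would play the roles of an image basis and its voxel weights, respectively.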

The sparseness priors for image bases and voxel weights played an important role in estimating spatially localized image bases (see Figure 5A). If either was lacking, the estimated bases were not localized, and the reconstruction performance declined. Thus, the sparseness priors effectively pruned off irrelevant mappings, leaving only relevant mappings between a small number of pixels and a small number of voxels. Although we did not explicitly incorporate knowledge of the retinotopic map, the voxels and pixels were linked in a manner consistent with this map (see Figures 2B and 2C). These results suggest that the sparseness priors not only improve model performance, but may also help to find physiologically meaningful relationships between a visual image and brain activity.

Our approach provides a general procedure for estimating a modular representation of perceptual or behavioral elements. Our previous study (Miyawaki et al., 2008) used multiple predefined image bases that assumed a putative image representation of the early visual cortex. Such a predefined model may work in brain areas for which the underlying neural representation is well known. In contrast, Bayesian CCA allows us to estimate modular representation without explicit assumptions about elemental features. Although our results obtained with visual image data are rather confirmatory, demonstrating the proof of concept for the automatic extraction of functional modules by Bayesian CCA modeling, the model can be applied to many different domains. For instance, when applied to higher visual areas, our approach may uncover elemental representations of visual objects or scenes (Fujita, Tanaka, Ito, & Cheng, 1992; Grill-Spector & Malach, 2004; Yamane, Carlson, Bowman, Wang, & Connor, 2008). Another possible application would be to estimate the relationship between motor behaviors and corresponding brain activity patterns (Poggio & Bizzi, 2004). If motor behavior consists of a combination of “synergies,” putative components of motor control (d’Avella, Saltiel, & Bizzi, 2003; Graziano, 2006), Bayesian CCA may allow specific synergies to be found as modular neural representations underlying a great variety of motor behaviors. Thus, our approach provides a new analysis tool for investigating neural representations across multiple cortical areas and modalities.

Appendix:  Derivation of Variance Parameters

Here, we describe the third step of the variational Bayesian algorithm: the estimation of the inverse variance parameters. The inverse variances are estimated by maximizing the free energy (see equation 2.22) with respect to their trial distribution, while the other trial distributions are fixed:

formula (A.1)

Because the four inverse variances are mutually independent, their joint trial distribution factorizes, and each inverse variance is estimated separately. The trial distribution for the first inverse variance is given by

formula (A.2)
formula (A.3)
formula (A.4)

where the gamma distribution is parameterized by its mean and a confidence parameter. The trial distribution for the second inverse variance is given by

formula (A.5)
formula (A.6)
formula (A.7)

The trial distributions for the remaining two inverse variances are given by

formula (A.8)
formula (A.9)
formula (A.10)

and

formula (A.11)
formula (A.12)
formula (A.13)
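For orientation, updates of this kind follow the standard variational Bayesian form for an inverse variance with a conjugate gamma prior. In generic notation (placeholders, not the paper's symbols), for an inverse variance $\alpha$ governing an $N$-dimensional weight vector $w$ with prior $\mathrm{Gam}(\alpha \mid a_0, b_0)$, the trial distribution is again a gamma distribution:

```latex
Q(\alpha) = \mathrm{Gam}(\alpha \mid a, b), \qquad
a = a_0 + \frac{N}{2}, \qquad
b = b_0 + \frac{1}{2}\,\bigl\langle \lVert w \rVert^2 \bigr\rangle, \qquad
\langle \alpha \rangle = \frac{a}{b},
```

where $\langle \cdot \rangle$ denotes the expectation under the current trial distributions. The mean-and-confidence parameterization used in the appendix is an equivalent reparameterization of this shape-and-rate form.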

Acknowledgments

We thank M. Takemiya, O. Yamashita, and T. Shimokawa for helpful comments on the manuscript. This study was supported by the Nissan Science Foundation, SCOPE (SOUMU), and SRPBS (MEXT).

References

Anderson, T. W. (2003). An introduction to multivariate statistical analysis (3rd ed.). New York: Wiley.

Attias, H. (1999). Inferring parameters and structure of latent variable models by variational Bayes. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (pp. 21–30). San Francisco: Morgan Kaufmann.

Bach, F. R., & Jordan, M. I. (2005). A probabilistic interpretation of canonical correlation analysis (Tech. Rep. 688). Berkeley: Department of Statistics, University of California, Berkeley.

Bell, A. J., & Sejnowski, T. J. (1997). The "independent components" of natural scenes are edge filters. Vision Research, 37, 3327–3338.

Bishop, C. M. (2006). Pattern recognition and machine learning. New York: Springer.

Dagli, M. S., Ingeholm, J. E., & Haxby, J. V. (1999). Localization of cardiac-induced signal change in fMRI. NeuroImage, 9, 407–415.

d'Avella, A., Saltiel, P., & Bizzi, E. (2003). Combinations of muscle synergies in the construction of a natural motor behavior. Nature Neuroscience, 6, 300–308.

Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience. Cambridge, MA: MIT Press.

Dumoulin, S. O., & Wandell, B. A. (2008). Population receptive field estimates in human visual cortex. NeuroImage, 39, 647–660.

Engel, S. A., Glover, G. H., & Wandell, B. A. (1997). Retinotopic organization in human visual cortex and the spatial precision of functional MRI. Cerebral Cortex, 7, 181–192.

Fujita, I., Tanaka, K., Ito, M., & Cheng, K. (1992). Columns for visual features of objects in inferotemporal cortex. Nature, 360, 343–346.

Fujiwara, Y., Miyawaki, Y., & Kamitani, Y. (2009). Estimating image bases for visual image reconstruction from human brain activity. In Y. Bengio, D. Schuurmans, J. Lafferty, C.K.I. Williams, & A. Culotta (Eds.), Advances in neural information processing systems, 22 (pp. 576–578). Red Hook, NY: Curran.

Graziano, M. (2006). The organization of behavioral repertoire in motor cortex. Annual Review of Neuroscience, 29, 105–134.

Grill-Spector, K., & Malach, R. (2004). The human visual cortex. Annual Review of Neuroscience, 27, 649–677.

Hubel, D. H., & Wiesel, T. N. (1974). Uniformity of monkey striate cortex: A parallel relationship between field size, scatter, and magnification factor. Journal of Comparative Neurology, 158, 295–306.

Kamitani, Y., & Tong, F. (2005). Decoding the visual and subjective contents of the human brain. Nature Neuroscience, 8, 679–685.

Kamitani, Y., & Tong, F. (2006). Decoding seen and attended motion directions from activity in the human visual cortex. Current Biology, 16, 1096–1102.

Kay, K. N., Naselaris, T., Prenger, R. J., & Gallant, J. L. (2008). Identifying natural images from human brain activity. Nature, 452, 352–355.

MacKay, D.J.C. (1994). Bayesian methods for backprop networks. Models of Neural Networks, 3, 211–254.

Mitchell, T. M., Shinkareva, S. V., Carlson, A., Chang, K. M., Malave, V. L., Mason, R. A., et al. (2008). Predicting human brain activity associated with the meanings of nouns. Science, 320, 1191–1195.

Miyawaki, Y., Uchida, H., Yamashita, O., Sato, M. A., Morito, Y., Tanabe, H. C., et al. (2008). Visual image reconstruction from human brain activity using a combination of multiscale local image decoders. Neuron, 60, 915–929.

Naselaris, T., Kay, K. N., Nishimoto, S., & Gallant, J. L. (2011). Encoding and decoding in fMRI. NeuroImage, 56, 400–410.

Neal, R. M. (1996). Bayesian learning for neural networks. New York: Springer-Verlag.

Olshausen, B. A., & Field, D. J. (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381, 607–609.

Parkhomenko, E., Tritchler, D., & Beyene, J. (2009). Sparse canonical correlation analysis with application to genomic data integration. Statistical Applications in Genetics and Molecular Biology, 8, 1–34.

Pereira, F., Mitchell, T., & Botvinick, M. (2009). Machine learning classifiers and fMRI: A tutorial overview. NeuroImage, 45, S199–S209.

Poggio, T., & Bizzi, E. (2004). Generalization in vision and motor control. Nature, 431, 768–774.

Sereno, M. I., Dale, A. M., Reppas, J. B., Kwong, K. K., Belliveau, J. W., & Brady, T. J. (1995). Borders of multiple visual areas in humans revealed by functional magnetic resonance imaging. Science, 268, 889–893.

Smith, A. T., Singh, K. D., Williams, A. L., & Greenlee, M. W. (2001). Estimating receptive field size from fMRI data in human striate and extrastriate visual cortex. Cerebral Cortex, 11, 1182–1190.

Tipping, M. E. (2001). Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1, 211–244.

Tootell, R. B., Silverman, M. S., Switkes, E., & De Valois, R. L. (1982). Deoxyglucose analysis of retinotopic organization in primate striate cortex. Science, 218, 902–904.

Wang, C. (2007). Variational Bayesian approach to canonical correlation analysis. IEEE Transactions on Neural Networks, 18, 905–910.

Witten, D. M., Tibshirani, R., & Hastie, T. (2009). A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics, 10, 515–534.

Yamane, Y., Carlson, E. T., Bowman, K. C., Wang, Z., & Connor, C. E. (2008). A neural code for three-dimensional object shape in macaque inferotemporal cortex. Nature Neuroscience, 11, 1352–1360.

Yamashita, O., Sato, M. A., Yoshioka, T., Tong, F., & Kamitani, Y. (2008). Sparse estimation automatically selects voxels relevant for the decoding of fMRI activity patterns. NeuroImage, 42, 1414–1429.

Note