Sidney R. Lehky
1–7 of 7 results
Neural Computation (2020) 32 (2): 281–329.
Published: 01 February 2020
Neurons selective for faces exist in humans and monkeys. However, characteristics of face cell receptive fields are poorly understood. In this theoretical study, we explore the effects of complexity, defined as algorithmic information (Kolmogorov complexity) and logical depth, on possible ways that face cells may be organized. We use tensor decompositions to decompose faces into a set of components, called tensorfaces, and their associated weights, which can be interpreted as model face cells and their firing rates. These tensorfaces form a high-dimensional representation space in which each tensorface forms an axis of the space. A distinctive feature of the decomposition algorithm is the ability to specify tensorface complexity. We found that low-complexity tensorfaces have blob-like appearances crudely approximating faces, while high-complexity tensorfaces appear clearly face-like. Low-complexity tensorfaces require a larger population to reach a criterion face reconstruction error than medium- or high-complexity tensorfaces, and thus are inefficient by that criterion. Low-complexity tensorfaces, however, generalize better when representing statistically novel faces, which are faces falling beyond the distribution of face description parameters found in the tensorface training set. The degree to which face representations are parts based or global forms a continuum as a function of tensorface complexity, with low- and medium-complexity tensorfaces being more parts based. Given the computational load imposed in creating high-complexity face cells (in the form of algorithmic information and logical depth), and the absence of a compelling advantage to using high-complexity cells, we suggest that face representations consist of a mixture of low- and medium-complexity face cells.
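The decomposition itself is easy to prototype. Below is a minimal sketch, assuming the tensorly library is installed and using random arrays as placeholders for face images: it applies ordinary non-negative CP decomposition (the paper's complexity-controlled algorithm is not a standard library routine) to obtain components ("tensorfaces") and per-face weights.

```python
# Minimal sketch: decompose a stack of images into components ("tensorfaces")
# and per-image weights via non-negative CP decomposition. The paper's
# algorithm additionally constrains component complexity; that constraint is
# NOT implemented here. Random data stands in for real face images.
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

rng = np.random.default_rng(0)
faces = rng.random((100, 32, 32))       # 100 hypothetical 32x32 "face" images
tensor = tl.tensor(faces)

rank = 16                                # number of tensorfaces in the basis
cp = non_negative_parafac(tensor, rank=rank, n_iter_max=200)
weights, factors = cp                    # factor matrices for each tensor mode
face_weights, rows, cols = factors       # face_weights: (100, rank) "rates"

# Each tensorface is the outer product of one column of rows and cols.
tensorface_0 = np.outer(rows[:, 0], cols[:, 0])
recon = tl.cp_to_tensor(cp)
err = np.linalg.norm(faces - recon) / np.linalg.norm(faces)
print(f"relative reconstruction error: {err:.3f}")
```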
Neural Computation (2014) 26 (10): 2135–2162.
Published: 01 October 2014
We have calculated the intrinsic dimensionality of visual object representations in anterior inferotemporal (AIT) cortex, based on responses of a large sample of cells stimulated with photographs of diverse objects. Because dimensionality was dependent on data set size, we determined asymptotic dimensionality as both the number of neurons and the number of stimulus images approached infinity. Our final dimensionality estimate was 93 (SD: 11), indicating that there is a basis set of approximately 100 independent features that characterize the dimensions of neural object space. We believe this is the first estimate of the dimensionality of neural visual representations based on single-cell neurophysiological data. The dimensionality of AIT object representations was much lower than the dimensionality of the stimuli. We suggest that there may be a gradual reduction in the dimensionality of object representations in neural populations going from retina to inferotemporal cortex as receptive fields become increasingly complex.
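A single-sample version of such an estimate is straightforward to illustrate. The sketch below uses synthetic data and the PCA participation ratio, a common effective-dimensionality measure standing in for the paper's estimator; the asymptotic extrapolation over neuron and stimulus counts is not reproduced here.

```python
# Minimal sketch: effective dimensionality of a neuron x stimulus response
# matrix. The participation ratio of the PCA eigenvalue spectrum is used as a
# stand-in estimator (not necessarily the paper's), on synthetic data with a
# known latent dimensionality plus noise.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_stimuli, true_dim = 300, 500, 93
latent = rng.standard_normal((true_dim, n_stimuli))
mixing = rng.standard_normal((n_neurons, true_dim))
responses = mixing @ latent + 0.5 * rng.standard_normal((n_neurons, n_stimuli))

eigvals = np.linalg.eigvalsh(np.cov(responses))   # neuron-space covariance
eigvals = np.clip(eigvals, 0, None)
participation_ratio = eigvals.sum() ** 2 / (eigvals ** 2).sum()
print(f"effective dimensionality estimate: {participation_ratio:.1f}")
```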
Neural Computation (2013) 25 (9): 2235–2264.
Published: 01 September 2013
Current population coding methods, including weighted averaging and Bayesian estimation, are based on extrinsic representations. These require that neurons be labeled with response parameters, such as tuning curve peaks or noise distributions, which are tied to some external, world-based metric scale. Firing rates alone, without this external labeling, are insufficient to represent a variable. However, the extrinsic approach does not explain how such neural labeling is implemented. A radically different and perhaps more physiological approach is based on intrinsic representations, which have access only to firing rates. Because neurons are unlabeled, intrinsic coding represents relative, rather than absolute, values of a variable. We show that intrinsic coding has representational advantages, including invariance, categorization, and discrimination, and in certain situations it may also recover absolute stimulus values.
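The distinction can be made concrete with a toy population. In the didactic sketch below (not the paper's formal construction), the extrinsic readout requires each neuron's tuning-curve peak as a label, while the intrinsic comparison operates on nothing but the unlabeled rate vectors.

```python
# Didactic contrast between extrinsic and intrinsic readouts (a toy example,
# not the paper's formal construction). Extrinsic decoding needs each neuron
# labeled with its tuning-curve peak; an intrinsic comparison uses only the
# unlabeled firing-rate vectors, so it captures relative structure.
import numpy as np

peaks = np.linspace(0, 180, 30)          # tuning peaks: the extrinsic "labels"

def population_response(stim_deg, width=20.0):
    return np.exp(-0.5 * ((stim_deg - peaks) / width) ** 2)

# Extrinsic readout: weighted averaging, which uses the labels explicitly.
rates = population_response(75.0)
extrinsic_estimate = (peaks * rates).sum() / rates.sum()

# Intrinsic readout: similarity of two unlabeled rate vectors.
similarity = np.corrcoef(population_response(75.0),
                         population_response(80.0))[0, 1]
print(f"extrinsic estimate: {extrinsic_estimate:.1f} deg")
print(f"intrinsic pattern similarity, 75 vs 80 deg: {similarity:.2f}")
```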
Neural Computation (2010) 22 (5): 1245–1271.
Published: 01 May 2010
The temporal waveform of neural activity is commonly estimated by low-pass filtering spike train data through convolution with a gaussian kernel. However, the criteria for selecting the gaussian width σ are not well understood. Given an ensemble of Poisson spike trains generated by an instantaneous firing rate function λ(t), the problem was to recover an optimal estimate of λ(t) by gaussian filtering. We provide equations describing the optimal value of σ using an error minimization criterion and examine how the optimal σ varies within a parameter space defining the statistics of inhomogeneous Poisson spike trains. The process was studied both analytically and through simulations. The rate functions λ(t) were randomly generated, with the three parameters defining spike statistics being the mean of λ(t), the variance of λ(t), and the exponent α of the Fourier amplitude spectrum 1/f^α of λ(t). The value of σ_opt followed a power law as a function of the pooled mean interspike interval I, σ_opt = aI^b, where a was inversely related to the coefficient of variation C_V of λ(t), and b was inversely related to the Fourier spectrum exponent α. Besides applications for data analysis, optimal recovery of an analog signal waveform λ(t) from spike trains may also be useful in understanding neural signal processing in vivo.
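The simulation side of this procedure is compact. The sketch below, assuming numpy and scipy and a simple sinusoidal rate rather than the paper's randomly generated 1/f^α rate functions, generates Poisson spike counts from a known λ(t), smooths the pooled trains with gaussian kernels of varying σ, and selects the σ minimizing squared error against the true rate. This brute-force scan is possible only because λ(t) is known; the paper's equations predict σ_opt from spike statistics alone.

```python
# Minimal sketch: recover a firing-rate waveform lambda(t) from Poisson spike
# trains by gaussian smoothing, scanning sigma for the lowest mean squared
# error against the known rate (feasible here only because lambda is known).
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(3)
dt, T = 0.001, 10.0                          # 1 ms bins, 10 s
t = np.arange(0, T, dt)
lam = 20 + 15 * np.sin(2 * np.pi * t)        # rate in spikes/s, always > 0

n_trials = 20
spikes = rng.poisson(lam * dt, size=(n_trials, t.size))  # binned spike counts
pooled = spikes.mean(axis=0) / dt            # trial-averaged rate estimate

best_mse, best_sigma = min(
    (np.mean((gaussian_filter1d(pooled, s / dt) - lam) ** 2), s)
    for s in np.linspace(0.005, 0.2, 40)     # candidate sigmas, in seconds
)
print(f"optimal sigma ~ {best_sigma * 1000:.0f} ms (MSE {best_mse:.1f})")
```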
Neural Computation (2004) 16 (7): 1325–1343.
Published: 01 July 2004
A Bayesian method is developed for estimating neural responses to stimuli, using likelihood functions incorporating the assumption that spike trains follow either pure Poisson statistics or Poisson statistics with a refractory period. The Bayesian and standard estimates of the mean and variance of responses are similar and asymptotically converge as the size of the data sample increases. However, the Bayesian estimate of the variance of the variance is much lower. This allows the Bayesian method to provide more precise interval estimates of responses. Sensitivity of the Bayesian method to the Poisson assumption was tested by conducting simulations perturbing the Poisson spike trains with noise. This did not affect Bayesian estimates of mean and variance to a significant degree, indicating that the Bayesian method is robust. The Bayesian estimates were less affected by the presence of noise than estimates provided by the standard method.
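For the pure Poisson case, the Bayesian update has a closed form under a conjugate gamma prior. The sketch below is a standard gamma-Poisson update on simulated spike counts, not necessarily the paper's exact formulation, and it omits the refractory-period likelihood.

```python
# Minimal sketch: Bayesian rate estimation from Poisson spike counts with a
# conjugate gamma prior. Covers only the pure Poisson case; the paper also
# treats a Poisson-with-refractory-period likelihood, not implemented here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
counts = rng.poisson(lam=8.0, size=25)       # spike counts on 25 trials

a0, b0 = 1.0, 0.1                            # weak Gamma(shape, rate) prior
a_post = a0 + counts.sum()                   # conjugate posterior update
b_post = b0 + counts.size

posterior = stats.gamma(a_post, scale=1.0 / b_post)
lo, hi = posterior.ppf([0.025, 0.975])
print(f"posterior mean rate: {posterior.mean():.2f} spikes/trial")
print(f"95% credible interval: [{lo:.2f}, {hi:.2f}]")
```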
Neural Computation (1999) 11 (6): 1261–1280.
Published: 15 August 1999
When the nervous system is presented with multiple simultaneous inputs of some variable, such as wavelength or disparity, they can be combined to give rise to qualitatively new percepts that cannot be produced by any single input value. For example, there is no single wavelength that appears white. Many models of decoding neural population codes have problems handling multiple inputs, either attempting to extract a single value of the input parameter or, in some cases, registering the presence of multiple inputs without synthesizing them into something new. These examples raise a more general issue regarding the interpretation of population codes. We propose that population decoding involves not the extraction of specific values of the physical inputs, but rather a transformation from the input space to some abstract representational space that is not simply related to physical parameters. As a specific example, a four-layer network is presented that implements a transformation from wavelength to a high-level hue-saturation color space.
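The core claim, that a mixture maps to a new point in an abstract color space rather than to any single wavelength, can be illustrated with a toy population. The sketch below is didactic and is not the paper's four-layer network: it vector-sums wavelength-tuned responses on a hue circle, so a mixture of two well-separated wavelengths lands near the center of the circle, i.e., desaturated, where no single wavelength can.

```python
# Didactic sketch: decode a population response to a MIXTURE of wavelengths
# into an abstract hue/saturation-like space rather than a single wavelength.
# (A toy vector-sum construction, not the paper's four-layer network.)
import numpy as np

wavelengths = np.linspace(400, 700, 60)      # preferred wavelengths (nm)

def response(stimulus_nm, width=40.0):
    return np.exp(-0.5 * ((stimulus_nm - wavelengths) / width) ** 2)

def to_color_space(pop):
    # Place each unit on a hue circle and take the population vector sum:
    # the angle acts as hue, the normalized length as saturation.
    angles = 2 * np.pi * (wavelengths - 400) / 300
    v = pop @ np.column_stack([np.cos(angles), np.sin(angles)]) / pop.sum()
    return np.arctan2(v[1], v[0]), np.hypot(v[0], v[1])

hue1, sat1 = to_color_space(response(550.0))                    # single input
hue2, sat2 = to_color_space(response(450.0) + response(650.0))  # mixture
print(f"pure 550 nm:  saturation {sat1:.2f}")
print(f"450+650 mix:  saturation {sat2:.2f}  (desaturated, 'white-like')")
```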
Neural Computation (1991) 3 (1): 44–53.
Published: 01 March 1991
It is proposed that inputs to binocular cells are gated by reciprocal inhibition between neurons located either in the lateral geniculate nucleus or in layer 4 of striate cortex. The strength of inhibitory coupling in the gating circuitry is modulated by layer 6 neurons, which are the outputs of binocular matching circuitry. If binocular inputs are matched, the inhibition is modulated to be weak, leading to fused vision, whereas if the binocular inputs are unmatched, inhibition is modulated to be strong, leading to rivalrous oscillations. These proposals are buttressed by psychophysical experiments measuring the strength of adaptational aftereffects following exposure to an adapting stimulus visible only intermittently during binocular rivalry.
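The proposed gating can be caricatured with a standard two-unit rivalry oscillator: reciprocal inhibition plus slow adaptation, with the inhibitory weight as the modulated parameter. The sketch below is generic and is not the paper's LGN/layer-4 circuit; weak inhibition settles into stable co-activity (fusion), while strong inhibition produces sustained alternation (rivalry).

```python
# Minimal sketch of the gating idea: two monocular units coupled by reciprocal
# inhibition, with slow adaptation. Weak inhibition yields stable co-activity
# ("fusion"); strong inhibition yields sustained alternation ("rivalry").
# A generic rivalry oscillator, not the paper's specific circuit.
import numpy as np

def simulate(w_inhib, steps=40000, dt=0.001, tau=0.02, tau_a=1.0,
             gain=3.0, k_adapt=2.5):
    r = np.array([0.6, 0.4])        # firing rates of left/right-eye units
    a = np.zeros(2)                 # slow adaptation variables
    trace = np.empty((steps, 2))
    for i in range(steps):
        drive = 1.0 - w_inhib * r[::-1] - a   # input minus cross-inhibition
        r += dt / tau * (-r + np.maximum(0.0, gain * drive))
        a += dt / tau_a * (-a + k_adapt * r)
        trace[i] = r
    return trace

for w in (0.3, 2.0):
    trace = simulate(w)
    depth = trace[trace.shape[0] // 2:, 0].std()   # modulation in second half
    print(f"inhibitory weight {w}: modulation depth {depth:.3f} "
          f"({'rivalry' if depth > 0.1 else 'fusion'})")
```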