Todd P. Coleman
Journal Articles
Publisher: Journals Gateway
Neural Computation (2019) 31 (4): 613–652.
Published: 01 April 2019
Abstract
The need to reason about uncertainty in large, complex, and multimodal data sets has become increasingly common across modern scientific environments. The ability to transform samples from one distribution P to another distribution Q enables the solution of many problems in machine learning (e.g., Bayesian inference, generative modeling) and has been actively pursued from theoretical, computational, and application perspectives across the fields of information theory, computer science, and biology. Performing such transformations in general still leads to computational difficulties, especially in high dimensions. Here, we consider the problem of computing such “measure transport maps” with efficient and parallelizable methods. Under the mild assumptions that P need not be known but can be sampled from, that the density of Q is known up to a proportionality constant, and that Q is log-concave, we formulate a convex optimization problem pertaining to relative entropy minimization. We show how an empirical minimization formulation and a polynomial chaos map parameterization allow for learning a transport map between P and Q with distributed and scalable methods. We also leverage findings from nonequilibrium thermodynamics to represent the transport map as a composition of simpler maps, each of which is learned sequentially with a transport-cost-regularized version of the aforementioned problem formulation. We provide examples of our framework within the context of Bayesian inference for the Boston housing data set and generative modeling for handwritten digit images from the MNIST data set.
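The core computation this abstract describes — learning a map T that pushes samples from P onto Q by minimizing relative entropy, using only samples from P and the unnormalized log-density of Q — can be sketched in a few lines. This is a minimal illustration, not the paper's method: the map is a degree-1 (affine) lower-triangular map rather than a full polynomial chaos expansion, the source P is an assumed shifted and scaled Gaussian, and a generic derivative-free optimizer stands in for the distributed, scalable solver.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

d = 2
# Samples from the source distribution P (assumed here: a shifted,
# axis-scaled Gaussian; in general P is known only through samples)
X = rng.normal(loc=[3.0, -1.0], scale=[2.0, 0.5], size=(2000, d))

def neg_log_q(z):
    # Unnormalized negative log-density of the target Q, here a standard
    # Gaussian (log-concave, known up to a proportionality constant)
    return 0.5 * np.sum(z**2, axis=1)

tril = np.tril_indices(d)
n_tri = d * (d + 1) // 2

def kl_objective(theta):
    # Degree-1 lower-triangular map T(x) = L x + b.  Empirical relative
    # entropy KL(T#P || Q), up to an additive constant:
    #   E_P[-log q(T(x))] - E_P[log |det grad T|]
    L = np.zeros((d, d))
    L[tril] = theta[:n_tri]
    b = theta[n_tri:]
    Z = X @ L.T + b
    logdet = np.sum(np.log(np.abs(np.diag(L))))
    return neg_log_q(Z).mean() - logdet

theta0 = np.concatenate([np.eye(d)[tril], np.zeros(d)])  # identity map
res = minimize(kl_objective, theta0, method="Nelder-Mead",
               options={"maxiter": 20000, "maxfev": 20000,
                        "xatol": 1e-9, "fatol": 1e-9})

L_hat = np.zeros((d, d))
L_hat[tril] = res.x[:n_tri]
Z = X @ L_hat.T + res.x[n_tri:]
# Pushed-forward samples should be approximately zero-mean, unit-scale
print(np.round(Z.mean(axis=0), 2), np.round(Z.std(axis=0), 2))
```

At the minimizer, the affine map whitens the empirical distribution of P, so the pushed-forward samples match the standard-Gaussian target in their first two moments; richer (higher-degree polynomial) parameterizations would be needed for non-Gaussian sources.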
Neural Computation (2010) 22 (8): 2002–2030.
Published: 01 August 2010
Abstract
Point-process models have been shown to be useful in characterizing neural spiking activity as a function of extrinsic and intrinsic factors. Most point-process models of neural activity are parametric, as they are often efficiently computable. However, if the actual point process does not lie in the assumed parametric class of functions, misleading inferences can arise. Nonparametric methods are attractive due to fewer assumptions, but their computation in general grows with the size of the data. We propose a computationally efficient method for nonparametric maximum likelihood estimation when the conditional intensity function, which characterizes the point process in its entirety, is assumed to be a Lipschitz continuous function but otherwise arbitrary. We show that by exploiting much structure, the problem becomes efficiently solvable. We next demonstrate a model selection procedure to estimate the Lipschitz parameter from data, akin to the minimum description length principle, and demonstrate consistency of our estimator under appropriate assumptions. Finally, we illustrate the effectiveness of our method with simulated neural spiking data, goldfish retinal ganglion neural data, and activity recorded in CA1 hippocampal neurons from an awake behaving rat. For the simulated data set, our method uncovers a more compact representation of the conditional intensity function when it exists. For the goldfish and rat neural data sets, we show that our nonparametric method outperforms the most common parametric and spline-based approaches on an absolute goodness-of-fit measure for point processes.
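The estimator this abstract describes can be illustrated in its simplest setting: an inhomogeneous Poisson process whose conditional intensity is piecewise constant on time bins, fit by maximum likelihood subject to a Lipschitz constraint between adjacent bins. Everything below is an assumed toy setup (sinusoidal true rate, bin width, Lipschitz bound K), and SciPy's generic constrained solver stands in for the paper's efficient specialized algorithm.

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

rng = np.random.default_rng(1)

# Simulated spiking data (assumption: inhomogeneous Poisson process with a
# slowly varying sinusoidal rate; 0.1 s bins over 5 s)
dt, T = 0.1, 5.0
t = np.arange(0.0, T, dt)
true_rate = 12.5 + 7.5 * np.sin(2 * np.pi * 0.4 * t)  # spikes/s
counts = rng.poisson(true_rate * dt)
n = len(t)

K = 30.0  # assumed Lipschitz bound on the intensity, in (spikes/s)/s

def nll(lam):
    # Negative log-likelihood of a Poisson process with piecewise-constant
    # conditional intensity: sum_k (lam_k * dt - n_k * log(lam_k * dt))
    return np.sum(lam * dt - counts * np.log(lam * dt))

# Lipschitz constraint between adjacent bins: |lam[k+1] - lam[k]| <= K * dt
D = np.diff(np.eye(n), axis=0)  # (n-1, n) first-difference matrix
lipschitz = LinearConstraint(D, -K * dt, K * dt)

res = minimize(nll, x0=np.full(n, counts.mean() / dt),
               method="trust-constr", constraints=[lipschitz],
               bounds=[(1e-6, None)] * n)
lam_hat = res.x  # nonparametric ML estimate of the conditional intensity
print(np.round(lam_hat[:5], 1))
```

Because the negative log-likelihood is convex in the intensity and the constraints are linear, this is a convex program; the constraint keeps the estimate from jumping to the noisy raw bin rates counts/dt. The paper's contribution is solving this class of problems efficiently and selecting K from data, neither of which the generic solver above attempts.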