Anand Rangarajan
1–5 of 5 results
Journal Articles
Toward Improving the Generation Quality of Autoregressive Slot VAEs
Neural Computation (2024) 36 (5): 858–896.
Published: 23 April 2024
Abstract
Unconditional scene inference and generation are challenging to learn jointly with a single compositional model. Despite encouraging progress on models that extract object-centric representations (“slots”) from images, unconditional generation of scenes from slots has received less attention. This is primarily because learning the multiobject relations necessary to imagine coherent scenes is difficult. We hypothesize that most existing slot-based models have a limited ability to learn object correlations. We propose two improvements that strengthen object correlation learning. The first is to condition the slots on a global, scene-level variable that captures higher-order correlations between slots. Second, we address the fundamental lack of a canonical order for objects in images by proposing to learn a consistent order to use for the autoregressive generation of scene objects. Specifically, we train an autoregressive slot prior to sequentially generate scene objects following a learned order. Ordered slot inference entails first estimating a randomly ordered set of slots using existing approaches for extracting slots from images, then aligning those slots to ordered slots generated autoregressively with the slot prior. Our experiments across three multiobject environments demonstrate clear gains in unconditional scene generation quality. Detailed ablation studies are also provided that validate the two proposed improvements.
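As an illustration of the generation scheme the abstract describes, here is a minimal sketch (not the paper's architecture) of an autoregressive slot prior conditioned on a global scene-level latent; the recurrent update and all weight matrices are hypothetical stand-ins with random parameters.

```python
import numpy as np

# Minimal sketch of autoregressive slot generation conditioned on a global
# scene latent z. All weights are random stand-ins, not trained parameters,
# and the recurrent update is a hypothetical simplification.
rng = np.random.default_rng(0)
D_z, D_h, D_s, K = 8, 16, 4, 3                 # latent, hidden, slot dims; slot count

W_zh = 0.1 * rng.standard_normal((D_z, D_h))   # conditions the state on z
W_hh = 0.1 * rng.standard_normal((D_h, D_h))   # carries slot history forward
W_sh = 0.1 * rng.standard_normal((D_s, D_h))   # feeds the previous slot back in
W_hm = 0.1 * rng.standard_normal((D_h, D_s))   # state -> slot mean
W_hv = 0.1 * rng.standard_normal((D_h, D_s))   # state -> slot log-variance

z = rng.standard_normal(D_z)                   # scene-level latent (higher-order correlations)
h = np.tanh(z @ W_zh)                          # initial state depends only on z
slots = []
for k in range(K):                             # generate slots in a fixed order
    mu, logvar = h @ W_hm, h @ W_hv
    s_k = mu + np.exp(0.5 * logvar) * rng.standard_normal(D_s)
    slots.append(s_k)
    h = np.tanh(h @ W_hh + s_k @ W_sh + z @ W_zh)  # next slot sees z and the slots so far

scene_slots = np.stack(slots)                  # K x D_s ordered slots, ready for a decoder
print(scene_slots.shape)
```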
Journal Articles
The Concave-Convex Procedure
Neural Computation (2003) 15 (4): 915–936.
Published: 01 April 2003
Abstract
The concave-convex procedure (CCCP) is a way to construct discrete-time iterative dynamical systems that are guaranteed to decrease global optimization and energy functions monotonically. This procedure can be applied to almost any optimization problem, and many existing algorithms can be interpreted in terms of it. In particular, we prove that all expectation-maximization algorithms and classes of Legendre minimization and variational bounding algorithms can be reexpressed in terms of CCCP. We show that many existing neural network and mean-field theory algorithms are also examples of CCCP. The generalized iterative scaling algorithm and Sinkhorn's algorithm can also be expressed as CCCP by changing variables. CCCP can be used both as a new way to understand, and prove the convergence of, existing optimization algorithms and as a procedure for generating new algorithms.
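To make the monotone-decrease guarantee concrete, here is a minimal sketch of a CCCP iteration on a toy double-well energy; the particular convex/concave split is an assumption chosen for illustration, not an example from the paper.

```python
import numpy as np

# Minimal sketch of CCCP on the toy energy E(x) = x**4 - 2*x**2, split
# (by assumption) into a convex part E_vex(x) = x**4 and a concave part
# E_cave(x) = -2*x**2. The CCCP update minimizes E_vex(x) + x * dE_cave(x_t),
# which here has the closed form x_{t+1} = cbrt(x_t).

def energy(x):
    return x**4 - 2.0 * x**2

x = 0.2                                        # arbitrary starting point
for t in range(20):
    x_next = np.cbrt(x)                        # argmin_x x**4 - 4*x_t*x  (set 4x^3 - 4x_t = 0)
    assert energy(x_next) <= energy(x) + 1e-12 # the guaranteed monotone decrease
    x = x_next

print(x, energy(x))                            # converges toward the minimizer x = 1, E = -1
```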
Journal Articles
Convergence Properties of the Softassign Quadratic Assignment Algorithm
Neural Computation (1999) 11 (6): 1455–1474.
Published: 15 August 1999
Abstract
The softassign quadratic assignment algorithm is a discrete-time, continuous-state, synchronous updating optimizing neural network. While its effectiveness has been shown in the traveling salesman problem, graph matching, and graph partitioning in thousands of simulations, its convergence properties have not been studied. Here, we construct discrete-time Lyapunov functions for the cases of exact and approximate doubly stochastic constraint satisfaction, which show convergence to a fixed point. The combination of good convergence properties and experimental success makes the softassign algorithm an excellent choice for neural quadratic assignment optimization.
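The core softassign step, exponentiation followed by alternating row/column (Sinkhorn) normalization toward a doubly stochastic matrix, can be sketched as follows; the benefit matrix Q here is arbitrary random data, not a quadratic assignment instance from the paper.

```python
import numpy as np

# Minimal sketch of the softassign step, assuming a square benefit matrix Q
# (e.g., the gradient of a quadratic assignment objective). Exponentiation
# keeps entries positive; alternating row/column (Sinkhorn) normalization
# drives the matrix toward double stochasticity.

def softassign(Q, beta, n_sinkhorn=50):
    M = np.exp(beta * Q)
    for _ in range(n_sinkhorn):
        M = M / M.sum(axis=1, keepdims=True)   # row normalization
        M = M / M.sum(axis=0, keepdims=True)   # column normalization
    return M

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 4))
M = softassign(Q, beta=5.0)
print(M.sum(axis=0), M.sum(axis=1))            # both close to all-ones
```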
Journal Articles
A Novel Optimizing Network Architecture with Applications
Neural Computation (1996) 8 (5): 1041–1060.
Published: 01 July 1996
Abstract
We present a novel optimizing network architecture with applications in vision, learning, pattern recognition, and combinatorial optimization. This architecture is constructed by combining the following techniques: (1) deterministic annealing, (2) self-amplification, (3) algebraic transformations, (4) clocked objectives, and (5) softassign. Deterministic annealing in conjunction with self-amplification avoids poor local minima and ensures that a vertex of the hypercube is reached. Algebraic transformations and clocked objectives help partition the relaxation into distinct phases. The problems considered have doubly stochastic matrix constraints or minor variations thereof. We introduce a new technique, softassign, which is used to satisfy this constraint. Experimental results on different problems are presented and discussed.
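A minimal sketch of how deterministic annealing and softassign interact on an assignment-type problem; self-amplification, algebraic transformations, and clocked objectives from the full architecture are omitted, and the benefit matrix is arbitrary random data.

```python
import numpy as np

# Minimal sketch: deterministic annealing around softassign for a toy linear
# assignment benefit matrix B. Raising the inverse temperature beta hardens
# the doubly stochastic match matrix toward a permutation matrix, i.e., a
# vertex of the hypercube.

def sinkhorn(M, n_iter=100):
    for _ in range(n_iter):
        M = M / M.sum(axis=1, keepdims=True)   # row normalization
        M = M / M.sum(axis=0, keepdims=True)   # column normalization
    return M

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))                # benefit of assigning row i to column j

for beta in np.geomspace(0.5, 50.0, 20):       # slowly raise the inverse temperature
    M = sinkhorn(np.exp(beta * B))             # softassign at the current temperature

print(np.round(M, 2))                          # approximately a 0/1 permutation matrix
print(M.argmax(axis=1))                        # the hardened assignment
```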
Journal Articles
Learning with Preknowledge: Clustering with Point and Graph Matching Distance Measures
Neural Computation (1996) 8 (4): 787–804.
Published: 01 May 1996
Abstract
Prior knowledge constraints are imposed upon a learning problem in the form of distance measures. Prototypical 2D point sets and graphs are learned by clustering with point-matching and graph-matching distance measures. The point-matching distance measure is approximately invariant under affine transformations—translation, rotation, scale, and shear—and permutations. It operates between noisy images with missing and spurious points. The graph-matching distance measure operates on weighted graphs and is invariant under permutations. Learning is formulated as an optimization problem. Large objectives so formulated (∼ million variables) are efficiently minimized using a combination of optimization techniques—softassign, algebraic transformations, clocked objectives, and deterministic annealing.
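A heavily simplified stand-in for the point-matching distance, keeping only permutation invariance (no affine invariance, no missing or spurious points) and using brute-force assignment rather than softassign with deterministic annealing; a distance of this kind is what the clustering is built on.

```python
import itertools
import numpy as np

# Simplified stand-in for a permutation-invariant point-matching distance:
# the best one-to-one matching is found by brute force over permutations,
# not by the softassign/annealing machinery used in the paper.

def point_match_distance(X, Y):
    """Smallest sum of squared distances over all one-to-one matchings."""
    cost = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)  # pairwise squared distances
    n = len(X)
    return min(cost[np.arange(n), list(perm)].sum()
               for perm in itertools.permutations(range(n)))

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 2))
B = A[rng.permutation(6)] + 0.01 * rng.standard_normal((6, 2))  # permuted, noisy copy
print(point_match_distance(A, B))              # small: the permutation is matched away
```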