Ravi Kothari
Neural Computation (2004) 16 (7): 1525–1544.
Published: 01 July 2004
Abstract
In general, pattern classification algorithms assume that all the features are available during the construction of a classifier and its subsequent use. In many practical situations, data are recorded in different servers that are geographically apart, and each server observes features of local interest. The underlying infrastructure and other logistics (such as access control) in many cases do not permit continual synchronization. Each server thus has a partial view of the data in the sense that feature subsets (not necessarily disjoint) are available at each server. In this article, we present a classification algorithm for this distributed, vertically partitioned data. We assume that local classifiers can be constructed based on the local partial views of the data available at each server. These local classifiers can be any one of the many standard classifiers (e.g., neural networks, decision trees, k-nearest neighbor). Often these local classifiers are constructed to support decision making at each location, and our focus is not on these individual local classifiers. Rather, our focus is on constructing a classifier that can use these local classifiers to achieve an error rate that is as close as possible to that of a classifier having access to the entire feature set. We empirically demonstrate the efficacy of the proposed algorithm and also provide theoretical results quantifying the loss incurred compared to the situation where the entire feature set is available to a single classifier.
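As a rough illustration of the setting described in this abstract, the sketch below trains local classifiers on overlapping feature subsets ("servers") and combines only their outputs with a simple stacking-style meta-classifier. The partition slices, classifier choices, and combination rule are illustrative assumptions, not the paper's actual algorithm.

```python
# Sketch: combine local classifiers trained on vertically partitioned features
# by feeding their predicted probabilities to a global combiner (stacking).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each "server" sees only a (possibly overlapping) subset of the features.
partitions = [slice(0, 5), slice(4, 9), slice(8, 12)]

# Local classifiers built from the partial views available at each server.
local_clfs = [DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr[:, p], y_tr)
              for p in partitions]

def local_scores(X):
    # Stack each local classifier's class-1 probability as a new feature.
    return np.column_stack([clf.predict_proba(X[:, p])[:, 1]
                            for clf, p in zip(local_clfs, partitions)])

# Global classifier that consumes only the local classifiers' outputs.
combiner = LogisticRegression().fit(local_scores(X_tr), y_tr)
print("combined accuracy:", combiner.score(local_scores(X_te), y_te))
```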
Neural Computation (1998) 10 (1): 59–65.
Published: 01 January 1998
Abstract
In this article we study the effect of dynamically modifying the weight matrix on the performance of a neural associative memory. The dynamic modification is implemented by adding, at each step, the outer product of the current state, scaled by a suitable constant η, to the correlation weight matrix. For single-shot synchronous dynamics, we analytically obtain the optimal value of η. Although knowledge of the noise percentage is required for calculating the optimal value of η, a fairly good choice of η can be made even when the amount of noise is not known. Experimental results are provided in support of the analysis. The efficacy of the proposed modification is also experimentally verified for the case of asynchronous updating with transient length > 1.
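A minimal sketch of the dynamic modification described above, assuming a standard correlation (Hebbian) weight matrix over bipolar patterns and single-shot synchronous updating; the value of η used here is arbitrary, not the analytically optimal value derived in the paper.

```python
# Sketch: at each recall step, add the outer product of the current state,
# scaled by eta, to the correlation weight matrix before updating.
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))

# Correlation (Hebbian) weight matrix with zeroed diagonal.
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0)

def recall(probe, eta=0.1, steps=1):
    # steps=1 corresponds to single-shot synchronous dynamics.
    x = probe.astype(float).copy()
    for _ in range(steps):
        W_dyn = W + eta * np.outer(x, x)   # dynamic modification of the weights
        x = np.sign(W_dyn @ x)
        x[x == 0] = 1
    return x

# Probe: a stored pattern with 10% of its bits flipped (the "noise").
probe = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
probe[flip] *= -1
print("overlap with stored pattern:", recall(probe) @ patterns[0] / N)
```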
Neural Computation (1997) 9 (6): 1381–1402.
Published: 15 August 1997
Abstract
We investigate the effects of including selected lateral interconnections in a feedforward neural network. In a network with one hidden layer consisting of m hidden neurons labeled 1, 2, ..., m, hidden neuron j is connected fully to the inputs, the outputs, and hidden neuron j + 1. As a consequence of the lateral connections, each hidden neuron receives two error signals: one from the output layer and one through the lateral interconnection. We show that the use of these lateral interconnections among the hidden-layer neurons facilitates controlled assignment of role and specialization of the hidden-layer neurons. In particular, we show that as training progresses, hidden neurons become progressively specialized, starting from the fringes (i.e., the lower- and higher-numbered hidden neurons, e.g., 1, 2, m - 1, m) and leaving the neurons in the center of the hidden layer (i.e., hidden-layer neurons numbered close to m/2) unspecialized or functionally identical. Consequently, the network behaves like network-growing algorithms without the explicit need to add hidden units, and like soft weight sharing due to the functionally identical neurons in the center of the hidden layer. Experimental results from one classification problem and one function approximation problem are presented to illustrate selective specialization of the hidden-layer neurons. In addition, the improved generalization that results from a decrease in the effective number of free parameters is illustrated through a simple function approximation example and with a real-world data set. Besides the reduction in the number of free parameters, the localization of weight sharing may also allow a procedural determination of the number of hidden-layer neurons required for a given learning task.
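The sketch below only illustrates the connection pattern described in the abstract: each hidden neuron sees the full input plus the activation of its lower-numbered lateral neighbour, and all hidden neurons feed the output layer. The weight shapes, activation function, and initialization are assumptions for illustration; the paper's training rule (the two error signals per hidden neuron) is not reproduced here.

```python
# Forward-pass sketch of a single-hidden-layer network with lateral connections
# from hidden neuron j to hidden neuron j + 1.
import numpy as np

rng = np.random.default_rng(0)
n_in, m, n_out = 4, 6, 2                         # inputs, hidden neurons, outputs

W_in = rng.normal(scale=0.5, size=(m, n_in))     # input  -> hidden
w_lat = rng.normal(scale=0.5, size=m)            # hidden j -> hidden j + 1
W_out = rng.normal(scale=0.5, size=(n_out, m))   # hidden -> output

def forward(x):
    h = np.zeros(m)
    prev = 0.0                                   # neuron 1 has no lateral input
    for j in range(m):
        # Each hidden neuron combines the inputs with its neighbour's output.
        h[j] = np.tanh(W_in[j] @ x + w_lat[j] * prev)
        prev = h[j]
    return W_out @ h, h

y, h = forward(rng.normal(size=n_in))
print("output:", y)
```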