Jayanta Basak
Journal Articles
Publisher: Journals Gateway
Neural Computation (2006) 18 (9): 2062–2101.
Published: 01 September 2006
Abstract
Recently we have shown that decision trees can be trained in the online adaptive (OADT) mode (Basak, 2004), leading to better generalization scores. OADTs were limited in that they could handle only two-class classification tasks with a given, fixed structure. In this article, we provide an architecture based on OADT, called ExOADT, which can handle multiclass classification tasks and perform function approximation. ExOADT is structurally similar to OADT, extended with a regression layer. We also show that ExOADT is capable not only of adapting the local decision hyperplanes in the nonterminal nodes but also of smoothly changing the structure of the tree depending on the data samples. We provide learning rules based on steepest gradient descent for the new model. Experimentally, we demonstrate the effectiveness of ExOADT on pattern classification and function approximation tasks. Finally, we briefly discuss the relationship of ExOADT to other classification models.
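The core mechanism described here — a soft decision node whose routing probabilities feed a regression layer, trained by steepest gradient descent — can be sketched at its smallest scale. The following is a minimal illustration of that general idea (a depth-one soft tree with two learned leaf values fitting a step function), not the paper's ExOADT model; all variable names and hyperparameters are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Toy 1-D regression target: a step function (0 for x < 0, 1 for x >= 0).
X = rng.uniform(-1, 1, size=200)
T = (X >= 0).astype(float)

# One soft decision node (w, b) routing each input between leaf values a0, a1.
w, b = rng.normal(), rng.normal()
a0, a1 = 0.25, 0.75          # leaf values initialized apart to break symmetry
lr = 0.5

for epoch in range(300):
    for x, t in zip(X, T):
        s = sigmoid(w * x + b)        # soft routing probability toward leaf 1
        y = s * a1 + (1 - s) * a0     # regression-layer output
        err = y - t
        # Steepest-descent updates for the squared-error loss.
        gz = err * (a1 - a0) * s * (1 - s)
        a1 -= lr * err * s
        a0 -= lr * err * (1 - s)
        w -= lr * gz * x
        b -= lr * gz

def predict(x):
    s = sigmoid(w * x + b)
    return s * a1 + (1 - s) * a0
```

After training, the node's hyperplane sharpens around the step at x = 0, and the leaf values approach the two plateau levels of the target.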
Neural Computation (2004) 16 (9): 1959–1981.
Published: 01 September 2004
Abstract
Decision trees and neural networks are widely used tools for pattern classification. Decision trees provide a highly localized representation, whereas neural networks provide a distributed but compact representation of the decision space. Decision trees cannot be induced in the online mode, and they are not adaptive to a changing environment, whereas neural networks are inherently capable of online learning and adaptivity. Here we provide a classification scheme called online adaptive decision trees (OADT), which is a tree-structured network like a decision tree and capable of online learning like a neural network. A new objective measure is derived for supervised learning with OADT. Experimental results validate the effectiveness of the proposed classification scheme. Also, on certain real-life data sets, we find that OADT performs better than two widely used models: the hierarchical mixture of experts and the multilayer perceptron.
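The contrast drawn here — a tree-structured decision node that nonetheless adapts online, one sample at a time — can be sketched with the smallest such tree: a single decision node with an adaptive hyperplane and fixed class labels at its two leaves. This is an illustrative reduction, not the OADT learning rules from the paper; the data and hyperparameters are made up for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)

# Two Gaussian blobs in 2-D: class 0 around (-1,-1), class 1 around (1,1).
X0 = rng.normal(loc=-1.0, scale=0.5, size=(100, 2))
X1 = rng.normal(loc=1.0, scale=0.5, size=(100, 2))
X = np.vstack([X0, X1])
T = np.array([0.0] * 100 + [1.0] * 100)

# One decision node with an adaptive hyperplane (w, b); the two leaves carry
# fixed class labels 0 and 1, so the output equals the soft routing probability.
w = np.zeros(2)
b = 0.0
lr = 0.3

order = rng.permutation(len(X))
for epoch in range(20):
    for i in order:                       # one sample at a time: online mode
        s = sigmoid(X[i] @ w + b)         # soft routing = predicted class-1 prob.
        err = s - T[i]
        g = err * s * (1 - s)             # squared-error gradient through sigmoid
        w -= lr * g * X[i]
        b -= lr * g

pred = (sigmoid(X @ w + b) > 0.5).astype(float)
accuracy = (pred == T).mean()
```

Each update touches only the current sample, so the hyperplane tracks the data stream without retraining the tree from scratch — the property the abstract contrasts with batch-induced decision trees.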
Neural Computation (2004) 16 (7): 1525–1544.
Published: 01 July 2004
Abstract
In general, pattern classification algorithms assume that all the features are available during the construction of a classifier and its subsequent use. In many practical situations, data are recorded on different servers that are geographically apart, and each server observes features of local interest. The underlying infrastructure and other logistics (such as access control) in many cases do not permit continual synchronization. Each server thus has a partial view of the data, in the sense that feature subsets (not necessarily disjoint) are available at each server. In this article, we present a classification algorithm for this distributed, vertically partitioned data. We assume that local classifiers can be constructed based on the local partial views of the data available at each server. These local classifiers can be any of the many standard classifiers (e.g., neural networks, decision trees, k-nearest neighbors). Often these local classifiers are constructed to support decision making at each location, and our focus is not on these individual local classifiers. Rather, our focus is on constructing a classifier that can use these local classifiers to achieve an error rate as close as possible to that of a classifier having access to the entire feature set. We empirically demonstrate the efficacy of the proposed algorithm and also provide theoretical results quantifying the loss incurred compared to the situation where the entire feature set is available to a single classifier.
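One simple way to combine local classifiers over a vertical partition — not necessarily the paper's algorithm — is to average their posterior estimates. The sketch below trains a logistic classifier per "server" on its own feature subset and combines the two; the data, partition, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=300):
    """Plain batch gradient descent on the logistic loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

rng = np.random.default_rng(2)

# Toy data: four features; the label depends on all of them.
X = rng.normal(size=(400, 4))
y = (X.sum(axis=1) > 0).astype(float)

# Vertical partition: server A observes features 0-1, server B features 2-3.
XA, XB = X[:, :2], X[:, 2:]
wA, bA = train_logistic(XA, y)        # local classifier at server A
wB, bB = train_logistic(XB, y)        # local classifier at server B

# Combine the local posterior estimates by averaging.
pA = sigmoid(XA @ wA + bA)
pB = sigmoid(XB @ wB + bB)
acc_A = ((pA > 0.5) == y).mean()
acc_B = ((pB > 0.5) == y).mean()
acc_combined = (((pA + pB) / 2 > 0.5) == y).mean()
```

Each local classifier sees only half the signal, so its individual accuracy is limited; averaging the two posteriors recovers much of the accuracy a full-feature classifier would attain, which is the gap the paper's theoretical results quantify.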
Neural Computation (2001) 13 (3): 651–676.
Published: 01 March 2001
Abstract
A single-layered Hough transform network is proposed that accepts the image coordinates of each object pixel as input and produces a set of outputs indicating the belongingness of the pixel to a particular structure (e.g., a straight line). The network adaptively learns the parametric forms of the linear segments present in the image. It is designed to learn and identify not only linear segments in two-dimensional images but also planes and hyperplanes in higher-dimensional spaces. It provides an efficient representation of visual information embedded in the connection weights. The network not only reduces the large space requirement of the classical Hough transform but also represents the parameters with high precision.
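The flavor of the approach — holding a line's normal parameters (theta, rho) as adaptive weights and updating them from pixel coordinates rather than voting into a discrete accumulator — can be sketched for a single unit fitting one noisy line by gradient descent on the perpendicular distance. This is an illustration of the parametric idea, not the paper's network; the line, noise level, and learning rate are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Pixels sampled from a noisy line y = 0.5 x + 1.
x = rng.uniform(-2, 2, size=300)
y = 0.5 * x + 1.0 + rng.normal(scale=0.1, size=300)

# One "Hough unit" holds the line's normal parameters (theta, rho);
# a point lies on the line when x*cos(theta) + y*sin(theta) = rho.
theta = rng.uniform(0, np.pi)
rho = 0.0
lr = 0.02

for _ in range(3000):
    d = x * np.cos(theta) + y * np.sin(theta) - rho   # signed perpendicular distances
    # Gradient descent on the mean squared perpendicular distance.
    g_theta = np.mean(2 * d * (-x * np.sin(theta) + y * np.cos(theta)))
    g_rho = np.mean(-2 * d)
    theta -= lr * g_theta
    rho -= lr * g_rho

mse = np.mean((x * np.cos(theta) + y * np.sin(theta) - rho) ** 2)
```

Unlike a classical accumulator, (theta, rho) are continuous weights, so no parameter-space discretization is needed and precision is limited only by the noise — the two advantages the abstract claims.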
Neural Computation (1999) 11 (4): 1011–1034.
Published: 15 May 1999
Abstract
A new, efficient algorithm for the blind separation of uniformly distributed sources is proposed. The mixing matrix is assumed to be orthogonal, obtained by prewhitening the observed signals. The learning rule adaptively estimates the mixing matrix by conceptually rotating a unit hypercube so that all output signal components are contained within or on the hypercube. Under some ideal constraints, it has been theoretically shown that the algorithm closely approximates an ideal convergent algorithm, which is much faster than existing convergent algorithms. The algorithm has been generalized to handle noisy signals by adaptively dilating the hypercube in conjunction with its rotation.
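The geometric idea — rotating a demixing transform until every output component falls inside the unit hypercube — can be sketched in two dimensions by gradient descent on an "outside-the-cube" penalty. This is an illustrative proxy for the concept, not the paper's learning rule; the mixing angle, penalty, and step size are assumptions for the example.

```python
import numpy as np

def rot(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]])

rng = np.random.default_rng(4)

# Two independent sources, uniform on [-1, 1], mixed by an unknown rotation
# (the mixing matrix is orthogonal after prewhitening).
S = rng.uniform(-1, 1, size=(2000, 2))
alpha = 0.4
X = S @ rot(alpha).T

# Learn a demixing rotation angle phi by penalizing output components
# that stick out of the unit hypercube.
phi = 0.0
lr = 2.0
for _ in range(400):
    R = rot(phi)
    Y = X @ R.T
    excess = np.maximum(np.abs(Y) - 1.0, 0.0)          # distance outside the cube
    dRdphi = np.array([[-np.sin(phi), -np.cos(phi)],
                       [np.cos(phi), -np.sin(phi)]])
    dYdphi = X @ dRdphi.T
    grad = np.mean(np.sum(2 * excess * np.sign(Y) * dYdphi, axis=1))
    phi -= lr * grad

Y = X @ rot(phi).T
# Residual misalignment between cube and learned rotation, modulo the
# inherent pi/2 permutation/sign ambiguity of the separation problem.
res = (phi + alpha) % (np.pi / 2)
misalign = min(res, np.pi / 2 - res)
```

When the penalty reaches zero, every output lies in the cube and the composed rotation is a signed permutation, i.e., the sources are recovered up to order and sign — the usual ambiguity in blind separation.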