Issei Sato
1–5 of 5 results
Neural Computation (2022) 34 (3): 781–803.
Published: 17 February 2022
Abstract
Noisy pairwise comparison feedback has been incorporated to improve the overall query complexity of interactively learning binary classifiers. The positivity comparison oracle is widely used to provide feedback on which of a pair of data points is more likely to be positive. Because accurate labels cannot be determined with this oracle alone without knowing the classification threshold, existing methods still rely on the traditional explicit labeling oracle, which directly answers the label of a given data point. The current method sorts all data points and uses the explicit labeling oracle to find the classification threshold. However, it has two drawbacks: (1) it performs sorting that is unnecessary for label inference, and (2) it naively adapts quicksort to noisy feedback. To avoid these inefficiencies and, at the same time, acquire information about the classification threshold, we propose a new pairwise comparison oracle concerning uncertainties. This oracle answers which of a pair of data points has higher uncertainty. We then propose an efficient adaptive labeling algorithm that takes advantage of the proposed oracle. In addition, we address the situation where the labeling budget is insufficient relative to the size of the data set. Finally, we confirm the feasibility of the proposed oracle and the performance of the proposed algorithm both theoretically and empirically.
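As a rough illustration of the baseline strategy criticized above (sort all points with the noisy positivity comparison oracle, then locate the classification threshold with the explicit labeling oracle), the following sketch stabilizes noisy comparisons by majority voting over repeated queries. The oracle interfaces, the repetition count k, and the binary search over the sorted order are illustrative assumptions, not the algorithm proposed in the article.

    import functools

    def majority_vote(query, a, b, k=5):
        # Repeat a noisy binary query k times and return the majority answer.
        return sum(query(a, b) for _ in range(k)) * 2 > k

    def label_by_sorting(points, more_positive, label, k=5):
        # more_positive(a, b): noisy oracle, True if a is more likely positive
        # than b (hypothetical interface).
        # label(x): explicit labeling oracle returning 0 or 1 (hypothetical interface).
        def cmp(a, b):
            # Put a before b when b is judged more likely positive than a,
            # so more-likely-positive points end up later in the order.
            return -1 if majority_vote(more_positive, b, a, k) else 1
        order = sorted(points, key=functools.cmp_to_key(cmp))
        # Binary search for the first position the labeling oracle calls positive.
        lo, hi = 0, len(order)
        while lo < hi:
            mid = (lo + hi) // 2
            if label(order[mid]) == 1:
                hi = mid
            else:
                lo = mid + 1
        return [(x, int(i >= lo)) for i, x in enumerate(order)]

The sorting step is exactly the part the article argues is unnecessary for label inference; the proposed uncertainty comparison oracle is designed to avoid it.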
Neural Computation (2021) 33 (12): 3361–3412.
Published: 12 November 2021
Abstract
Ordinal regression is aimed at predicting an ordinal class label. In this letter, we consider its semisupervised formulation, in which we have unlabeled data along with ordinal-labeled data for training an ordinal regressor. There are several metrics for evaluating the performance of ordinal regression, such as the mean absolute error, the mean zero-one error, and the mean squared error. However, the existing studies do not take the evaluation metric into account, restrict the choice of models, and have no theoretical guarantee. To overcome these problems, we propose a novel generic framework for semisupervised ordinal regression based on the empirical risk minimization principle that is applicable to optimizing all of the metrics mentioned above. In addition, our framework allows flexible choices of models, surrogate losses, and optimization algorithms without common geometric assumptions on the unlabeled data, such as the cluster assumption or the manifold assumption. We provide an estimation error bound showing that our risk estimator is consistent. Finally, we conduct experiments to demonstrate the usefulness of our framework.
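For concreteness, one standard building block such a framework can plug in is a threshold model with the all-threshold logistic surrogate, a common surrogate when the mean absolute error is the target metric. The sketch below shows only this generic surrogate, under assumed names; it is not the letter's semisupervised risk estimator.

    import numpy as np

    def all_threshold_loss(score, y, thresholds):
        # score: real-valued output f(x) of any model.
        # y: ordinal label in {1, ..., K}.
        # thresholds: sorted biases b_1 <= ... <= b_{K-1}.
        k = np.arange(1, len(thresholds) + 1)
        signs = np.where(k < y, 1.0, -1.0)   # f(x) should exceed b_k exactly when k < y
        margins = signs * (score - np.asarray(thresholds, dtype=float))
        return float(np.sum(np.logaddexp(0.0, -margins)))  # sum_k log(1 + exp(-margin_k))

    def predict(score, thresholds):
        # Predicted label: one plus the number of thresholds the score exceeds.
        return 1 + int(np.sum(score > np.asarray(thresholds)))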
Neural Computation (2021) 33 (8): 2163–2192.
Published: 26 July 2021
Abstract
Deep learning is often criticized for two serious issues that rarely exist in natural nervous systems: overfitting and catastrophic forgetting. Deep networks can even memorize randomly labeled data, in which there is little knowledge behind the instance-label pairs. When a deep network continually learns over time by accommodating new tasks, it usually quickly overwrites the knowledge learned from previous tasks. In neuroscience, it is well known that human brain reactions exhibit substantial variability even in response to the same stimulus, a phenomenon referred to as neural variability. This mechanism balances accuracy and plasticity/flexibility in the motor learning of natural nervous systems. It motivates us to design a similar mechanism, named artificial neural variability (ANV), that helps artificial neural networks learn some advantages from “natural” neural networks. We rigorously prove that ANV acts as an implicit regularizer of the mutual information between the training data and the learned model. This result theoretically guarantees that ANV strictly improves generalizability, robustness to label noise, and robustness to catastrophic forgetting. We then devise a neural variable risk minimization (NVRM) framework and neural variable optimizers to achieve ANV for conventional network architectures in practice. Empirical studies demonstrate that NVRM can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible cost.
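A minimal sketch of the underlying idea of injecting variability into the weights at training time is shown below, assuming a PyTorch-style model and optimizer. The noise scale and the perturb/restore scheme are illustrative assumptions and not the article's exact NVRM optimizers.

    import torch

    def anv_style_step(model, loss_fn, batch, optimizer, noise_std=0.01):
        # One training step with artificial weight noise: perturb the parameters,
        # backpropagate through the perturbed weights, restore the mean weights,
        # and then apply the update.  noise_std is an illustrative choice.
        x, y = batch
        noises = []
        with torch.no_grad():
            for p in model.parameters():          # inject Gaussian variability
                n = noise_std * torch.randn_like(p)
                p.add_(n)
                noises.append(n)
        loss = loss_fn(model(x), y)               # loss at the perturbed weights
        optimizer.zero_grad()
        loss.backward()
        with torch.no_grad():
            for p, n in zip(model.parameters(), noises):
                p.sub_(n)                          # restore the unperturbed weights
        optimizer.step()                           # update with the noisy gradient
        return loss.item()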
Neural Computation (2021) 33 (5): 1234–1268.
Published: 13 April 2021
Abstract
Pairwise similarities and dissimilarities between data points are often obtained more easily than full labels of data in real-world classification problems. To make use of such pairwise information, an empirical risk minimization approach has been proposed in which an unbiased estimator of the classification risk is computed from only pairwise similarities and unlabeled data. However, this approach has not yet been able to handle pairwise dissimilarities. Semisupervised clustering methods can incorporate both similarities and dissimilarities into their framework; however, they typically require strong geometric assumptions on the data distribution, such as the manifold assumption, which may cause severe performance deterioration. In this letter, we derive an unbiased estimator of the classification risk based on all of the similarities and dissimilarities as well as the unlabeled data. We theoretically establish an estimation error bound and experimentally demonstrate the practical usefulness of our empirical risk minimization method.
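The quantity being estimated is the ordinary classification risk; writing it with the class prior makes clear what a risk-rewriting estimator has to express through the similar, dissimilar, and unlabeled data distributions. This is only the standard target risk, not the letter's estimator itself:

R(f) = \pi \, \mathbb{E}_{x \sim p_{+}}\big[\ell\big(f(x)\big)\big] + (1 - \pi) \, \mathbb{E}_{x \sim p_{-}}\big[\ell\big(-f(x)\big)\big],

where \pi is the class prior, p_{+} and p_{-} are the class-conditional densities, and \ell is the (surrogate) loss.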
Neural Computation (2020) 32 (3): 659–681.
Published: 01 March 2020
Abstract
Learning from triplet comparison data has been studied extensively in the contexts of metric learning, where we want to learn a distance metric between two instances, and ordinal embedding, where we want to learn an embedding of the given instances in a Euclidean space that preserves the comparison order as well as possible. Unlike fully labeled data, triplet comparison data can be collected in a more accurate and human-friendly way. Although learning from triplet comparison data has been considered in many applications, the important fundamental question of whether we can learn a classifier only from triplet comparison data, without any of the labels, has remained unanswered. In this letter, we give a positive answer to this important question by proposing an unbiased estimator of the classification risk under the empirical risk minimization framework. Since the proposed method is based on the empirical risk minimization framework, it inherently has the advantage that any surrogate loss function and any model, including neural networks, can easily be applied. Furthermore, we theoretically establish an estimation error bound for the proposed empirical risk minimizer. Finally, we provide experimental results showing that our method works well empirically and outperforms various baseline methods.
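The estimation error bound mentioned here follows the standard empirical risk minimization argument: if \hat{f} minimizes the empirical risk \hat{R} over a class \mathcal{F} and f^{*} minimizes the true risk R over \mathcal{F}, then (a generic decomposition, not the article's specific bound)

R(\hat{f}) - R(f^{*}) = \big(R(\hat{f}) - \hat{R}(\hat{f})\big) + \big(\hat{R}(\hat{f}) - \hat{R}(f^{*})\big) + \big(\hat{R}(f^{*}) - R(f^{*})\big) \le 2 \sup_{f \in \mathcal{F}} \big|\hat{R}(f) - R(f)\big|,

since the middle term is nonpositive by the definition of \hat{f}; the supremum is then bounded with standard tools such as Rademacher complexity.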