Qi Zhao
Journal Articles
Publisher: Journals Gateway
Neural Computation (2019) 31 (3): 555–573.
Published: 01 March 2019
Abstract
Redundancy is a fundamental characteristic of many biological processes, such as those in the genetic, visual, muscular, and nervous systems, yet the mechanism driving it is not fully understood. Until recently, redundancy was understood only as a means of attaining fault tolerance, and this view is reflected in the design of many man-made systems. In contrast, our previous work on redundant sensing (RS) demonstrated an example in which redundancy can be engineered solely to enhance accuracy and precision. The design was inspired by the binocular structure of human vision, which we believe may operate on a similar principle. In this letter, we present a unified theory describing how such a use of redundancy is made possible by two complementary mechanisms: representational redundancy (RPR) and entangled redundancy (ETR). We also point out two additional examples where this understanding of redundancy can explain a system's superior performance. One is the human musculoskeletal system (HMS), a biological instance, and the other is the deep residual neural network (ResNet), an artificial counterpart. We envision that our theory will provide a framework for the future development of bio-inspired redundant artificial systems and will assist studies of the fundamental mechanisms governing various biological processes.
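The sketch below is not the paper's RS design; it only illustrates the textbook observation behind the abstract's claim that redundancy can improve accuracy and precision: fusing two independently noisy readings of the same quantity (loosely analogous to the two eyes in binocular vision) reduces the error relative to either reading alone. The sensor names and noise parameters are hypothetical.

```python
import numpy as np

# Illustrative sketch only: averaging two independent, equally noisy readings of
# the same quantity halves the error variance -- the kind of accuracy/precision
# gain the abstract attributes to redundancy (not the paper's RS mechanism).

rng = np.random.default_rng(0)
true_value = 1.0   # hypothetical quantity being sensed
noise_std = 0.1    # hypothetical per-sensor noise level
n_trials = 100_000

sensor_a = true_value + rng.normal(0.0, noise_std, n_trials)
sensor_b = true_value + rng.normal(0.0, noise_std, n_trials)
fused = 0.5 * (sensor_a + sensor_b)  # redundant readings combined by averaging


def rmse(x):
    """Root-mean-square error of estimates x against the true value."""
    return float(np.sqrt(np.mean((x - true_value) ** 2)))


print("single-sensor RMSE:", rmse(sensor_a))
print("fused RMSE        :", rmse(fused))  # roughly noise_std / sqrt(2)
```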
Journal Articles
Publisher: Journals Gateway
Neural Computation (2005) 17 (11): 2482–2507.
Published: 01 November 2005
Abstract
The goal of semisupervised clustering/mixture modeling is to learn the underlying groups in a given data set when some form of instance-level supervision is also available, usually as labels or pairwise sample constraints. Most prior work with constraints assumes the number of classes is known, with each learned cluster assumed to be a class and, hence, subject to the given class constraints. When the number of classes is unknown or when the one-cluster-per-class assumption is not valid, the use of constraints may actually be deleterious to learning the ground-truth data groups. We address this by (1) allowing multiple mixture components to be allocated to individual classes and (2) estimating both the number of components and the number of classes. We also address new class discovery, with components devoid of constraints treated as putative unknown classes. On both real-world and synthetic data, our method is shown to accurately estimate the number of classes and to compare favorably with the recent approach of Shental, Bar-Hillel, Hertz, and Weinshall (2003).
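As a rough illustration of the setup described above (not the authors' algorithm, which estimates the number of components and classes and handles pairwise constraints), the sketch below over-segments the data with a scikit-learn Gaussian mixture, lets several components map to a single labeled class, and flags components with no labeled members as putative unknown classes. The dataset, component count, and label budget are all hypothetical.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

# Simplified sketch: fixed component count and a handful of labels instead of
# pairwise constraints. It only illustrates (1) several components sharing one
# class and (2) label-free components treated as putative unknown classes.

X, y_true = make_blobs(n_samples=600, centers=4, cluster_std=1.0, random_state=0)

# Pretend only two of the four ground-truth groups are labeled, 5 points each.
labeled_idx = np.concatenate([np.where(y_true == c)[0][:5] for c in (0, 1)])
labels = {int(i): int(y_true[i]) for i in labeled_idx}

# Over-segment the data: more mixture components than known classes.
gmm = GaussianMixture(n_components=6, random_state=0).fit(X)
comp = gmm.predict(X)

# Map each component to the majority class among its labeled members, if any;
# components with no labeled members become putative unknown classes.
component_to_class = {}
for k in range(gmm.n_components):
    member_labels = [labels[int(i)] for i in labeled_idx if comp[i] == k]
    if member_labels:
        component_to_class[k] = max(set(member_labels), key=member_labels.count)
    else:
        component_to_class[k] = "unknown"

print(component_to_class)
```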