Tom Bylander
Neural Computation (1995) 7 (2): 370–379.
Published: 01 March 1995
Abstract
We demonstrate sufficient conditions for polynomial learnability of suboptimal linear threshold functions using perceptrons. The central result is as follows. Suppose there exists a vector w* of n weights (including the threshold) with "accuracy" 1 − α, "average error" η, and "balancing separation" σ; i.e., with probability 1 − α, w* correctly classifies an example x; over examples incorrectly classified by w*, the expected value of |w* · x| is η (the source of inaccuracy does not matter); and over a certain portion of correctly classified examples, the expected value of |w* · x| is σ. Then, with probability 1 − δ, the perceptron achieves accuracy at least 1 − [ε + α(1 + η/σ)] after O(nε⁻²σ⁻² ln(1/δ)) examples.
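The bound above concerns the classical perceptron update rule. As a minimal illustration (not the paper's analysis), the sketch below runs the standard perceptron on hypothetical synthetic data, with the threshold absorbed into the weight vector as in the abstract; all data and parameter names here are assumptions for the example.

```python
import numpy as np

def perceptron(X, y, epochs=100):
    """Classical perceptron. A constant-1 feature is appended so the
    threshold is learned as the last weight (the abstract's w includes
    the threshold). Labels y are in {-1, +1}."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # absorb threshold
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        updated = False
        for xi, yi in zip(Xb, y):
            if yi * np.dot(w, xi) <= 0:  # misclassified (or on boundary)
                w += yi * xi             # perceptron update
                updated = True
        if not updated:                  # no mistakes in a full pass
            break
    return w

# toy linearly separable data (hypothetical, for illustration only)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
w = perceptron(X, y)
Xb = np.hstack([X, np.ones((200, 1))])
acc = np.mean(np.sign(Xb @ w) == y)
```

On separable data like this, the perceptron converges to a consistent classifier; the paper's contribution is characterizing what accuracy it achieves, and after how many examples, when no perfect w* exists.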