A simple form of cooperation between the k-nearest neighbors (k-NN) approach to classification and the neural-like property of adaptation is explored. A tunable, high-level k-NN decision rule is defined that subsumes most previous generalizations of the common majority rule. A learning procedure is developed that applies to this rule and exploits the statistical features that can be induced from the training set. The overall approach is tested on a problem of handwritten character recognition. Experiments show that adaptivity in the decision rule can improve the recognition and rejection capability of standard k-NN classifiers.
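As a rough illustration of the baseline that the adaptive rule generalizes, a standard k-NN majority rule with a simple rejection option might be sketched as follows (the function name, the vote-count rejection threshold `min_votes`, and the Euclidean metric are illustrative assumptions, not details taken from the paper):

```python
from collections import Counter
import math

def knn_classify(train, query, k=3, min_votes=2):
    """Standard k-NN majority rule with a simple rejection option.

    train: list of (feature_vector, label) pairs
    query: feature vector to classify
    Returns the majority label among the k nearest neighbors,
    or None (reject) if that label has fewer than min_votes votes.
    """
    # Sort training points by Euclidean distance to the query.
    dists = sorted((math.dist(x, query), label) for x, label in train)
    # Count class votes among the k nearest neighbors.
    votes = Counter(label for _, label in dists[:k])
    label, count = votes.most_common(1)[0]
    # Reject (return None) when the majority is not strong enough.
    return label if count >= min_votes else None
```

The tunable rule described in the abstract can be viewed as replacing the fixed majority threshold above with parameters that the learning procedure adjusts from the training set.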