The perceptron membrane is a new connectionist model that solves discrimination (classification) problems with piecewise linear surfaces. The discrimination surfaces of perceptron membranes are defined as unions of convex polyhedra. Starting from a single convex polyhedron, new facets and new polyhedra are added during learning, while the positions and orientations of the facets are continuously adapted to the training examples. Considering each facet as a perceptron cell, a geometric credit assignment provides a local training domain for each perceptron of the network. This makes it possible to apply statistical theorems on the probability of good generalization to each unit on its learning domain, and yields a reliable criterion for perceptron elimination based on the Vapnik-Chervonenkis dimension. Furthermore, a regularization procedure is implemented. The efficiency of the model is demonstrated on well-known problems such as the 2-spirals and waveform benchmarks.
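The decision rule described above, a point is classified positive if it lies inside at least one convex polyhedron, where each polyhedron is an intersection of half-spaces and each facet corresponds to a perceptron, can be sketched as follows. This is a minimal illustration of the decision function only, not the paper's learning algorithm; the function names and the example square are hypothetical.

```python
import numpy as np

def in_polyhedron(x, facets):
    # A convex polyhedron is an intersection of half-spaces:
    # x is inside iff w . x + b >= 0 for every facet (w, b),
    # i.e. every facet perceptron fires positively.
    return all(np.dot(w, x) + b >= 0.0 for w, b in facets)

def classify(x, polyhedra):
    # The discrimination surface is a union of convex polyhedra:
    # x is labelled 1 iff it falls inside at least one of them.
    return int(any(in_polyhedron(x, p) for p in polyhedra))

# Hypothetical example: the unit square [0,1]^2 as a single polyhedron
# with four facets (one perceptron per side).
square = [
    (np.array([1.0, 0.0]), 0.0),   # x >= 0
    (np.array([-1.0, 0.0]), 1.0),  # x <= 1
    (np.array([0.0, 1.0]), 0.0),   # y >= 0
    (np.array([0.0, -1.0]), 1.0),  # y <= 1
]
polyhedra = [square]

print(classify(np.array([0.5, 0.5]), polyhedra))  # inside  -> 1
print(classify(np.array([2.0, 0.5]), polyhedra))  # outside -> 0
```

Adding a new polyhedron to the list enlarges the positive region, which is how the model grows non-convex, piecewise linear decision surfaces from convex pieces.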