This paper describes REGAL, a distributed genetic algorithm-based system designed for learning first-order logic concept descriptions from examples. The system is a hybrid of the Pittsburgh and the Michigan approaches: the population constitutes a redundant set of partial concept descriptions, each evolved separately. To increase effectiveness, REGAL is specifically tailored to the concept learning task; it is therefore task-dependent, but domain-independent. The system proved particularly robust with respect to parameter settings across a variety of application domains.
REGAL is based on a selection operator, called the Universal Suffrage operator, which provably allows the population to converge asymptotically, on average, to an equilibrium state in which several species coexist. The system is presented in both a serial and a parallel version, and a new distributed computational model is proposed and discussed.
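To make the selection scheme concrete, the following is a minimal sketch of a Universal-Suffrage-style selection step, not REGAL's actual implementation: each randomly drawn training example acts as a "voter" that elects one individual among those covering it, via roulette-wheel selection on fitness. All names here (`covers`, `fitness`, `n_voters`) are illustrative assumptions.

```python
import random

def universal_suffrage_select(examples, population, covers, fitness, n_voters):
    """Hedged sketch of a Universal-Suffrage-style selection step.

    Assumed interfaces (not from the paper): covers(ind, ex) tests whether a
    partial concept description `ind` matches training example `ex`, and
    fitness(ind) returns a non-negative score for `ind`.
    """
    selected = []
    # Draw n_voters examples at random; each one casts a vote.
    for example in random.choices(examples, k=n_voters):
        # Only individuals covering the voting example are eligible.
        candidates = [ind for ind in population if covers(ind, example)]
        if not candidates:
            # The real system would instead seed a new individual
            # covering this example; the sketch simply skips the vote.
            continue
        # Roulette-wheel selection among the eligible candidates.
        total = sum(fitness(ind) for ind in candidates)
        pick = random.uniform(0, total)
        acc = 0.0
        for ind in candidates:
            acc += fitness(ind)
            if acc >= pick:
                selected.append(ind)
                break
    return selected
```

Because different examples elect different covering individuals, selection pressure is distributed across the example set, which is the intuition behind the coexistence of several species (partial descriptions) at equilibrium.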
The system has been tested on a simple artificial domain for the sake of illustration, and on several complex real-world and artificial domains in order to show its power and to analyze its behavior under various conditions. The results obtained so far suggest that genetic search may be a valuable alternative to logic-based approaches to learning concepts, when no (or little) a priori knowledge is available and a very large hypothesis space has to be explored.