Traditional machine vision assumes that the vision system recovers a complete, labeled description of the world. Recently, several researchers have criticized this model and proposed an alternative that treats perception as a distributed collection of task-specific, context-driven visual routines [1, 12]. Some of these researchers have argued that in natural living systems such visual routines are the product of natural selection. So far, actual implementations have relied on hand-coded, task-specific visual routines. In this article we propose an alternative approach in which visual routines for simple tasks are created by artificial evolution. We present results from a series of runs on actual camera images, in which simple routines were evolved using genetic programming techniques. The results are promising: the evolved routines correctly process up to 93% of the test images, which is better than any algorithm we were able to write by hand.
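The abstract does not detail the genetic programming setup used to evolve the routines. As a minimal, hypothetical sketch of the generic GP loop it refers to (random program trees, fitness evaluation, selection, crossover, and mutation), one could write something like the following; the function set, fitness task, and all parameters here are illustrative assumptions, not those of the paper:

```python
import random

# Hypothetical GP sketch: evolve expression trees to fit sample data.
# Function and terminal sets below are illustrative, not the paper's.
FUNCS = {'add': lambda a, b: a + b,
         'sub': lambda a, b: a - b,
         'mul': lambda a, b: a * b}
TERMS = ['x', 1.0, 2.0]

def random_tree(depth=3):
    """Grow a random program tree of bounded depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    f = random.choice(list(FUNCS))
    return (f, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Interpret a tree on input x."""
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    f, a, b = tree
    return FUNCS[f](evaluate(a, x), evaluate(b, x))

def fitness(tree, cases):
    # Sum of absolute errors against the target outputs (lower is better).
    return sum(abs(evaluate(tree, x) - y) for x, y in cases)

def mutate(tree):
    # Crude mutation: replace the whole individual with a fresh tree.
    return random_tree(depth=2)

def crossover(a, b):
    # Naive crossover: graft a random subtree of b into a's root.
    if isinstance(a, tuple) and isinstance(b, tuple):
        f, left, right = a
        return (f, random.choice(b[1:]), right)
    return a

def evolve(cases, pop_size=60, generations=40):
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, cases))
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            child = crossover(p1, p2)
            if random.random() < 0.2:
                child = mutate(child)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda t: fitness(t, cases))

random.seed(0)
# Toy stand-in task: fit y = x*x + 1 on a few sample points.
cases = [(float(x), x * x + 1.0) for x in range(-3, 4)]
best = evolve(cases)
print(fitness(best, cases))
```

In the paper's setting the fitness function would instead score a candidate routine on the camera images, but the evolutionary loop itself has the same shape.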
One category of research in Artificial Life is concerned with modeling and building so-called adaptive autonomous agents: systems that inhabit a dynamic, unpredictable environment in which they try to satisfy a set of time-dependent goals or motivations. Agents are said to be adaptive if they improve their competence at dealing with these goals based on experience. Autonomous agents constitute a new approach to the study of Artificial Intelligence (AI), one strongly inspired by biology, in particular ethology, the study of animal behavior. Research in autonomous agents has brought a new wave of excitement to the field of AI. This paper reflects on the state of the art of this new approach. It attempts to extract its main ideas, evaluate the contributions made so far, and identify its current limitations and open problems.