Traditional machine vision assumes that the vision system recovers a complete, labeled description of the world. Recently, several researchers have criticized this model and proposed an alternative that treats perception as a distributed collection of task-specific, context-driven visual routines [1, 12]. Some of these researchers have argued that in natural living systems such visual routines are the product of natural selection. So far, actual implementations have relied on hand-coded, task-specific visual routines (e.g., ). In this article we propose an alternative approach in which visual routines for simple tasks are created by artificial evolution. We present results from a series of runs on real camera images, in which simple routines were evolved using genetic programming techniques. The results are promising: the evolved routines correctly process up to 93% of the test images, which is better than any algorithm we were able to write by hand.
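To make the idea concrete, the following is a minimal sketch of evolving a "visual routine" with genetic programming. It is not the authors' system: the toy task (classifying 3x3 grayscale patches as containing a bright blob or not), the primitive set (patch statistics such as mean, max, min), and the mutation-only evolutionary loop are all illustrative assumptions chosen to keep the example self-contained.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# Hypothetical primitives: a routine is an expression tree whose leaves are
# patch statistics and whose internal nodes are simple arithmetic operators.
TERMINALS = ["mean", "mx", "mn"]
FUNCTIONS = {"add": lambda a, b: a + b,
             "sub": lambda a, b: a - b,
             "mul": lambda a, b: a * b,
             "mx2": lambda a, b: max(a, b)}

def features(patch):
    """Compute the terminal values for a 3x3 grayscale patch."""
    flat = [v for row in patch for v in row]
    return {"mean": sum(flat) / len(flat), "mx": max(flat), "mn": min(flat)}

def random_tree(depth=2):
    """Grow a random expression tree of bounded depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    f = random.choice(list(FUNCTIONS))
    return (f, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, feats):
    """Recursively evaluate an expression tree on a feature dictionary."""
    if isinstance(tree, str):
        return feats[tree]
    f, left, right = tree
    return FUNCTIONS[f](evaluate(left, feats), evaluate(right, feats))

def make_patch(bright):
    """Synthesize a toy patch: uniformly bright or uniformly dark."""
    base = 0.8 if bright else 0.1
    return [[base + random.uniform(0, 0.15) for _ in range(3)] for _ in range(3)]

# Toy training set standing in for the camera images of the article.
DATA = [(make_patch(b), b) for b in [True, False] * 20]

def fitness(tree):
    """Fraction of patches classified correctly (output > 0.5 means 'blob')."""
    correct = sum((evaluate(tree, features(p)) > 0.5) == label
                  for p, label in DATA)
    return correct / len(DATA)

def mutate(tree):
    """Replace a random subtree with a fresh random tree."""
    if random.random() < 0.5 or isinstance(tree, str):
        return random_tree(2)
    return (tree[0], mutate(tree[1]), tree[2])

def evolve(pop_size=30, gens=15):
    """Mutation-only GP loop with tournament selection and elitism."""
    pop = [random_tree() for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        parents = [max(random.sample(pop, 3), key=fitness)
                   for _ in range(pop_size)]
        pop = [mutate(p) for p in parents]
        pop[0] = best  # elitism: the best routine found so far always survives
        best = max(pop, key=fitness)
    return best, fitness(best)

routine, acc = evolve()
print(f"best routine: {routine}  accuracy: {acc:.2f}")
```

A full genetic-programming system would also use subtree crossover and evolve over real image operations (filters, scans, markers) rather than scalar patch statistics, but the selection-variation-evaluation loop above is the core of the approach.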