How to facilitate the evolution of cooperation is a key question in multi-agent systems and game-theoretic settings. Individual reinforcement learners often fail to learn coordinated behavior. An evolutionary approach to selection can produce optimal behavior but may require significant computational effort, while social imitation of behavior yields only weak coordination in a society. Our goal in this paper is to improve agent behavior at reduced computational cost by combining evolutionary techniques, collective learning, and social imitation. We design a genetic-algorithm-based cooperation framework equipped with these techniques to solve particular coordination games in complex multi-agent networks. In this framework, offspring agents inherit more successful behavior selected from game-playing parent agents, and all agents in the network improve their performance through collective reinforcement learning and social imitation. Experiments are carried out to test the proposed framework and compare its performance with previous work. The results show that the framework is more effective for the evolution of cooperation in complex multi-agent social systems than evolutionary, reinforcement-learning, or imitation systems on their own.
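The three mechanisms named above can be combined in a single loop, as in this minimal sketch. It is an illustrative assumption, not the paper's actual algorithm: agents on a ring network play a two-action coordination game, reinforce the actions that earned payoff, occasionally imitate their best-scoring neighbor, and the worst agent is replaced each generation by a mutated offspring of the best. All parameter names and values (`ALPHA`, `IMIT`, `MUT`, the ring topology) are hypothetical.

```python
import random

random.seed(1)

N = 20             # agents arranged on a ring network (assumed topology)
ALPHA = 0.1        # reinforcement-learning step size (illustrative)
IMIT = 0.2         # probability of imitating a neighbor (illustrative)
MUT = 0.05         # mutation noise applied to offspring (illustrative)

def payoff(a, b):
    """Pure coordination game: agents score only when actions match."""
    return 1.0 if a == b else 0.0

def neighbors(i):
    return [(i - 1) % N, (i + 1) % N]

# Each agent's "behavior" is its probability of choosing action 1.
probs = [random.random() for _ in range(N)]

for gen in range(200):
    acts = [1 if random.random() < p else 0 for p in probs]
    scores = [sum(payoff(acts[i], acts[j]) for j in neighbors(i))
              for i in range(N)]

    # Collective reinforcement learning: nudge each agent's probability
    # toward the action it just played, in proportion to the payoff earned.
    for i in range(N):
        probs[i] += ALPHA * (scores[i] / 2.0) * (float(acts[i]) - probs[i])

    # Social imitation: with some probability, copy a better-scoring neighbor.
    for i in range(N):
        if random.random() < IMIT:
            best_nb = max(neighbors(i), key=lambda j: scores[j])
            if scores[best_nb] > scores[i]:
                probs[i] = probs[best_nb]

    # Evolutionary selection: the worst agent is replaced by a mutated
    # offspring of the best agent.
    worst = min(range(N), key=lambda i: scores[i])
    best = max(range(N), key=lambda i: scores[i])
    probs[worst] = min(1.0, max(0.0, probs[best] + random.gauss(0, MUT)))

# Mean probability near 0 or 1 indicates the population has coordinated
# on a common action.
coord = sum(probs) / N
print(round(coord, 2))
```

The point of the sketch is the interleaving: selection supplies coarse, population-level improvement, while per-generation learning and imitation refine behavior cheaply between selection events, which is the division of labor the abstract argues reduces computational effort.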