Wolfgang Banzhaf
Proceedings Papers
isal2020, ALIFE 2020: The 2020 Conference on Artificial Life, pp. 180–187 (July 13–18, 2020). DOI: 10.1162/isal_a_00336
Abstract
We analyze an artificial economy model designed to handle severe non-equilibrium situations. This agent-based model is intended to allow innovation in the form of new technologies, producers, and consumers entering (and leaving) the system. Here we examine a disruption of consumption patterns akin to the economic crisis brought about in the real economy by the coronavirus and the ensuing COVID-19 pandemic.
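The paper's model is not reproduced here, but the kind of consumption-pattern disruption the abstract describes can be illustrated with a minimal hypothetical sketch: agents split a fixed budget over two goods, and a pandemic-style shock shifts their preference weights. All names and numbers below are illustrative assumptions, not the authors' model.

```python
class Consumer:
    """Hypothetical consumer splitting a budget over two goods:
    index 0 = 'essential', index 1 = 'discretionary' (illustrative labels)."""

    def __init__(self, budget):
        self.budget = budget
        self.prefs = [0.5, 0.5]  # baseline preference weights

    def demand(self, shock=False):
        # Under a pandemic-style shock, spending shifts toward essentials.
        prefs = [0.9, 0.1] if shock else self.prefs
        return [self.budget * w for w in prefs]

consumers = [Consumer(budget=100.0) for _ in range(1000)]
normal = [sum(d) for d in zip(*(c.demand() for c in consumers))]
shocked = [sum(d) for d in zip(*(c.demand(shock=True) for c in consumers))]
# Aggregate demand for discretionary goods collapses under the shock,
# while demand for essentials spikes -- a severe non-equilibrium shift.
```

In the paper's richer setting, producers and technologies would also enter and leave in response to such shifts; this sketch only shows the demand-side disruption.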
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, pp. 388–395 (July 29–August 2, 2019). DOI: 10.1162/isal_a_00191
Abstract
We evolve floating-point Sextic polynomial populations of genetic programming binary trees for up to a million generations. Programs with almost 400,000,000 instructions are created by crossover. To support an unbounded Long-Term Evolution Experiment (LTEE) in GP, we use both SIMD-parallel AVX 512-bit instructions and 48 threads, yielding performance of up to 149 billion GP operations per second (149 giga-GPops) on a single Intel Xeon Gold 6126 2.60 GHz server.
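The data-parallel idea behind the AVX evaluation can be sketched in NumPy: one GP program is evaluated on all fitness cases at once, vectorized over the input array, analogous to evaluating many test cases per SIMD instruction. This is an illustrative sketch, not the authors' AVX implementation; the prefix representation and the 48 fitness cases are assumptions.

```python
import numpy as np

def target(x):
    # Sextic polynomial GP benchmark: f(x) = x^6 - 2x^4 + x^2
    return x**6 - 2*x**4 + x**2

def eval_program(prog, x):
    """Evaluate a prefix-notation GP program on all fitness cases at once.
    Every stack slot is a whole NumPy array, so each arithmetic op
    processes all cases in one vectorized step (the SIMD analogue)."""
    stack = []
    for tok in reversed(prog):
        if tok == 'x':
            stack.append(x)
        elif isinstance(tok, float):
            stack.append(np.full_like(x, tok))
        else:
            a, b = stack.pop(), stack.pop()
            if tok == '+':
                stack.append(a + b)
            elif tok == '-':
                stack.append(a - b)
            elif tok == '*':
                stack.append(a * b)
    return stack[0]

x = np.linspace(-1.0, 1.0, 48)   # fitness cases (count is an assumption)
prog = ['*', 'x', 'x']           # candidate program: x * x
fitness = float(np.mean((eval_program(prog, x) - target(x))**2))
```

The paper's interpreter additionally spreads independent evaluations across 48 threads; here only the per-program vectorization is shown.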
Proceedings Papers
alife2018, ALIFE 2018: The 2018 Conference on Artificial Life, pp. 436–443 (July 23–27, 2018). DOI: 10.1162/isal_a_00082
Abstract
How to facilitate the evolution of cooperation is a key question in multi-agent systems and game-theoretical situations. Individual reinforcement learners often fail to learn coordinated behavior. Using an evolutionary approach for selection can produce optimal behavior but may require significant computational effort. Social imitation of behavior causes only weak coordination in a society. Our goal in this paper is to improve the behavior of agents with reduced computational effort by combining evolutionary techniques, collective learning, and social imitation. We designed a genetic-algorithm-based cooperation framework equipped with these techniques in order to solve particular coordination games in complex multi-agent networks. In this framework, offspring agents inherit more successful behavior selected from game-playing parent agents, and all agents in the network improve their performance through collective reinforcement learning and social imitation. Experiments are carried out to test the proposed framework and compare its performance with previous work. Experimental results show that the framework is more effective for the evolution of cooperation in complex multi-agent social systems than evolutionary, reinforcement-learning, or imitation systems on their own.
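The combination the abstract describes — reinforcement learning for individual improvement plus social imitation of successful agents — can be sketched for a simple two-action coordination game. Everything below (the game, payoffs, learning rate, imitation probability) is an illustrative assumption, not the paper's framework; the GA inheritance step is reduced here to copying the best performer's policy.

```python
import random

ACTIONS = [0, 1]  # two-action coordination game: matching actions pays 1

class Agent:
    def __init__(self):
        self.q = [0.0, 0.0]  # one Q-value per action (stateless learner)

    def act(self, eps=0.1):
        # epsilon-greedy action selection
        if random.random() < eps:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[a])

    def learn(self, a, reward, alpha=0.5):
        # stateless Q-learning update toward the observed reward
        self.q[a] += alpha * (reward - self.q[a])

    def imitate(self, other):
        # social imitation: copy a more successful agent's policy
        self.q = list(other.q)

def play_round(agents):
    """Each agent plays one random partner; both learn from the payoff.
    Only the initiating agent is credited in `total` in this sketch."""
    total = [0.0] * len(agents)
    for i, a in enumerate(agents):
        j = random.randrange(len(agents))
        ai, aj = a.act(), agents[j].act()
        r = 1.0 if ai == aj else 0.0
        a.learn(ai, r)
        agents[j].learn(aj, r)
        total[i] += r
    return total

random.seed(0)
agents = [Agent() for _ in range(20)]
for generation in range(50):
    payoff = play_round(agents)
    best = max(range(len(agents)), key=lambda i: payoff[i])
    for i, ag in enumerate(agents):
        # occasional imitation of the round's best performer
        if i != best and random.random() < 0.2:
            ag.imitate(agents[best])
```

In the paper's framework the selection step is a genetic algorithm over a structured agent network rather than direct policy copying, but the interplay of the three mechanisms has the same shape.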
Proceedings Papers
ecal2013, ECAL 2013: The Twelfth European Conference on Artificial Life, pp. 981–988 (September 2–6, 2013). DOI: 10.1162/978-0-262-31709-2-ch146
Proceedings Papers
ecal2011, ECAL 2011: The 11th European Conference on Artificial Life, p. 132 (August 8–12, 2011). DOI: 10.7551/978-0-262-29714-1-ch132
Proceedings Papers
ecal2011, ECAL 2011: The 11th European Conference on Artificial Life, p. 51 (August 8–12, 2011). DOI: 10.7551/978-0-262-29714-1-ch051