Andrea De Lorenzo
1–4 of 4 results
Proceedings Papers
isal2023, ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference, 121 (July 24–28, 2023). doi:10.1162/isal_a_00585
Abstract
Scientific research plays a crucial role in advancing human civilization, thanks to the efforts of a multitude of individual actors. Their behavior is largely driven by individual incentives, both explicit and implicit. In this paper, we propose and validate a multi-agent model of the complex system of scholarly publishing and investigate the impact of incentives on research output. We use reinforcement learning to make the behavior of the actors optimizable, guiding their optimization with a reward signal that encodes the incentives. We consider various combinations of incentives and predefined behaviors and analyze their impact on both individual (h-index, impact factor) and overall indices of research output. Our results suggest that, despite its simplicity, our model captures the main dynamics of the system. Moreover, we find that (a) most incentives tend to favor productivity over quality and (b) incentives related to the perceived reputation of journals tend to result in a waste of research effort.
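The central mechanism of the abstract, a reward signal that encodes an incentive scheme and steers agent behavior, can be illustrated with a toy bandit-style learner. This is a minimal sketch, not the paper's model: the actions, paper counts, impact values, and weights (`w_count`, `w_impact`) are all invented for illustration.

```python
import random

# Hypothetical researcher-agent: each episode it chooses between producing
# many low-quality papers or one high-quality paper. The reward weights
# encode the incentive scheme; a tabular value estimate is learned online.
ACTIONS = ("quantity", "quality")

def reward(action, w_count, w_impact):
    # "quantity" yields 3 papers of impact 1; "quality" yields 1 paper of impact 4
    papers, impact = (3, 1) if action == "quantity" else (1, 4)
    return w_count * papers + w_impact * impact

def train(w_count, w_impact, episodes=2000, eps=0.1, lr=0.5, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}  # value estimate per action
    for _ in range(episodes):
        # epsilon-greedy action selection, then incremental value update
        a = rng.choice(ACTIONS) if rng.random() < eps else max(q, key=q.get)
        q[a] += lr * (reward(a, w_count, w_impact) - q[a])
    return max(q, key=q.get)  # the behavior the incentives favor

print(train(w_count=1.0, w_impact=0.2))  # productivity-weighted incentive
print(train(w_count=0.2, w_impact=1.0))  # impact-weighted incentive
```

The point of the sketch is the finding (a) above: whichever quantity the reward weights emphasize, the learned behavior follows.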
Proceedings Papers
isal2020, ALIFE 2020: The 2020 Conference on Artificial Life, 190–198 (July 13–18, 2020). doi:10.1162/isal_a_00276
Abstract
Adaptation of agents in artificial life scenarios is especially effective when agents may both evolve, i.e., inherit traits from their parents, and learn by interacting with the environment. The learning process may be boosted with forms of social learning, i.e., by allowing an agent to combine its own experiences with knowledge transferred from other agents. In this work, we tackle two specific questions regarding social learning and evolution: (a) from whom should learners learn? (b) how should knowledge be transferred? We address these questions by experimentally investigating two scenarios: a simple one, in which the mechanism for evolution and learning is easily interpretable, and a more complex and realistic artificial life scenario, in which agents compete for survival. Experimental results show that social learning is more profitable when (a) learners learn from a small set of good teachers and (b) the knowledge to be transferred is determined by the teachers' experience rather than the learner's.
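The teacher-selection question above can be made concrete with a toy population. This sketch is illustrative only, not the paper's setup: agents refine a scalar "skill" toward a hidden optimum via individual trial-and-error (a stand-in for evolution and learning), plus a social step that pulls everyone toward the average knowledge of the best `n_teachers`. All names and values are invented.

```python
import random

TARGET = 10.0  # hidden optimum the agents try to discover (illustrative)

def fitness(x):
    return -abs(x - TARGET)

def run(n_teachers, steps=100, pop_size=20, blend=0.3, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 8.0) for _ in range(pop_size)]
    for _ in range(steps):
        # individual trial-and-error: keep a random perturbation only if it helps
        new_pop = []
        for x in pop:
            trial = x + rng.gauss(0.0, 0.5)
            new_pop.append(trial if fitness(trial) > fitness(x) else x)
        # social learning: everyone moves toward the best teachers' knowledge
        teachers = sorted(new_pop, key=fitness, reverse=True)[:n_teachers]
        avg = sum(teachers) / n_teachers
        pop = [x + blend * (avg - x) for x in new_pop]
    return sum(abs(x - TARGET) for x in pop) / pop_size  # mean error

print(run(n_teachers=2))   # small set of good teachers
print(run(n_teachers=20))  # everyone teaches everyone
```

Varying `n_teachers` lets one probe question (a): whether pulling toward a few good teachers or toward the whole population transfers knowledge more effectively.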
Proceedings Papers
isal2020, ALIFE 2020: The 2020 Conference on Artificial Life, 570–577 (July 13–18, 2020). doi:10.1162/isal_a_00248
Abstract
We consider a multi-agent system in which the individual goal is to collect resources, but where the amount of collected resources also depends on the others' decisions. Agents can communicate and can take advantage of the plans communicated by other agents: they may therefore develop more profitable strategies. We ask whether some kind of collective behaviour with respect to communication emerges in this system without being explicitly promoted. To investigate this aspect, we design three different scenarios (a cooperative, a competitive, and a mixed one) in which agents' behaviors are individually learned by means of reinforcement learning. We consider different strategies concerning communication and learning, including no-communication, always-communication, and optional-communication. Experimental results show that always-communication leads to a collective behaviour with the best results in terms of both overall earned resources and equality between agents. The optional-communication strategy leads to similar collective strategies in some of these scenarios, but in others some agents develop individual behaviours that oppose the collective welfare and thus result in high inequality.
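Why communicating plans can raise both total payoff and equality can be seen in a two-agent toy. This is a deliberately simplified sketch, not the paper's environment: two foragers choose between patch A (value 4) and patch B (value 3), a patch chosen by both is split, and under always-communication agent 2 hears agent 1's announced plan and best-responds; the patch names and values are invented.

```python
# Toy resource-collection game with and without plan communication.
PATCHES = {"A": 4.0, "B": 3.0}

def payoff(c1, c2):
    # a patch targeted by both agents is split evenly
    share = 2 if c1 == c2 else 1
    return PATCHES[c1] / share, PATCHES[c2] / share

def no_communication():
    # without information, both greedily target the richest patch and split it
    return payoff("A", "A")

def always_communication():
    c1 = "A"                                           # agent 1's announced plan
    c2 = max(PATCHES, key=lambda c: payoff(c1, c)[1])  # agent 2 best-responds
    return payoff(c1, c2)

print(no_communication())      # both crowd patch A and split it
print(always_communication())  # agents spread across patches
```

Communication lets the agents coordinate onto different patches, so the joint outcome improves; how close the two individual payoffs end up is the equality aspect the abstract measures.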
Proceedings Papers
isal2019, ALIFE 2019: The 2019 Conference on Artificial Life, 574–581 (July 29–August 2, 2019). doi:10.1162/isal_a_00223
Abstract
Soft robots allow for interesting morphological and behavioral designs because they exhibit more degrees of freedom than robots composed of rigid parts. In particular, voxel-based soft robots (VSRs), aggregations of elastic cubic building blocks, have attracted the interest of Robotics and Artificial Life researchers. VSRs can be controlled by changing the volume of individual blocks: simple yet effective controllers that do not exploit feedback from the environment have been automatically designed by means of Evolutionary Algorithms (EAs). In this work, we explore the possibility of evolving sensing controllers in the form of artificial neural networks: we thus allow the robot to sense the environment in which it moves. Although the search space for a sensing controller is larger than that of its non-sensing counterpart, we show that effective sensing controllers can be evolved that realize interesting locomotion behaviors. We also experimentally investigate the impact of the VSR morphology on the effectiveness of the search and verify that the sensing controllers are indeed able to exploit their sensing ability to better solve the locomotion task.
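The overall recipe, a neural network mapping sensor readings to per-voxel volume commands, with weights optimized by an evolutionary loop, can be sketched in a few lines. This is a toy illustration, not the authors' code: the sensor/voxel counts are arbitrary, and a hand-made regression target stands in for the fitness that a real setup would obtain from simulated locomotion.

```python
import math
import random

N_SENSORS, N_VOXELS = 3, 4  # arbitrary sizes for the sketch

def controller(weights, sensors):
    # one tanh-squashed linear unit per voxel: volume command in [-1, 1]
    return [math.tanh(sum(w * s for w, s in
                          zip(weights[v * N_SENSORS:(v + 1) * N_SENSORS], sensors)))
            for v in range(N_VOXELS)]

def evolve(generations=300, sigma=0.2, seed=0):
    rng = random.Random(seed)
    # fixed batch of sensor readings standing in for environment feedback
    batch = [[rng.uniform(-1, 1) for _ in range(N_SENSORS)] for _ in range(20)]

    def fit(w):
        # stand-in fitness: reward commands that react to the sensors
        err = 0.0
        for sensors in batch:
            target = math.tanh(sensors[0] - sensors[1])  # arbitrary reactive target
            err += sum((o - target) ** 2 for o in controller(w, sensors))
        return -err

    # (1+1) evolutionary strategy over the controller weights
    best = [rng.gauss(0.0, 0.5) for _ in range(N_SENSORS * N_VOXELS)]
    best_f = fit(best)
    for _ in range(generations):
        child = [w + rng.gauss(0.0, sigma) for w in best]
        child_f = fit(child)
        if child_f > best_f:
            best, best_f = child, child_f
    return best_f

print(evolve())
```

In the paper's actual setting the fitness would come from simulating the VSR and measuring locomotion, which is what makes the sensing controller's larger search space worth paying for.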