Marco Dorigo
1–6 of 6 journal articles
Journal Articles
Publisher: Journals Gateway
Artificial Life (2014) 20 (3): 291–317.
Published: 01 July 2014
Abstract
We study task partitioning in the context of swarm robotics. Task partitioning is the decomposition of a task into subtasks that can be tackled by different workers. We focus on the case in which a task is partitioned into a sequence of subtasks that must be executed in a certain order. This implies that the subtasks must interface with each other, and that the output of a subtask is used as input for the subtask that follows. A distinction can be made between task partitioning with direct transfer and with indirect transfer. We focus our study on the first case: The output of a subtask is directly transferred from an individual working on that subtask to an individual working on the subtask that follows. As a test bed for our study, we use a swarm of robots performing foraging. The robots have to harvest objects from a source, situated in an unknown location, and transport them to a home location. When a robot finds the source, it memorizes its position and uses dead reckoning to return there. Dead reckoning is appealing in robotics, since it is a cheap localization method and it does not require any additional external infrastructure. However, dead reckoning leads to errors that grow in time if not corrected periodically. We compare a foraging strategy that does not make use of task partitioning with one that does. We show that cooperation through task partitioning can be used to limit the effect of dead reckoning errors. This results in improved capability of locating the object source and in increased performance of the swarm. We use the implemented system as a test bed to study benefits and costs of task partitioning with direct transfer. We implement the system with real robots, demonstrating the feasibility of our approach in a foraging scenario.
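To illustrate why uncorrected dead reckoning favors the shorter trips that task partitioning creates, here is a hypothetical sketch (not the article's system): a robot takes unit steps whose heading is corrupted by Gaussian noise, and the mean error of its dead-reckoned position grows with the number of uncorrected steps. The noise level and trip lengths below are invented for illustration.

```python
import math
import random

def dead_reckoning_drift(n_steps, noise_std, trials=2000, seed=0):
    """Average final position error of a straight-line run tracked by
    dead reckoning: each unit step has Gaussian noise on the heading,
    so the error grows with the number of uncorrected steps."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = y = 0.0  # true position; the dead-reckoned estimate is (n_steps, 0)
        for _ in range(n_steps):
            heading = rng.gauss(0.0, noise_std)  # intended heading is 0 rad
            x += math.cos(heading)
            y += math.sin(heading)
        total += math.hypot(x - n_steps, y)
    return total / trials

# A long un-partitioned trip accumulates more drift than each short leg
# of a partitioned one, which is why handing objects over part-way lets
# robots re-anchor before the error grows too large.
long_trip = dead_reckoning_drift(100, 0.05)
short_leg = dead_reckoning_drift(25, 0.05)
print(long_trip > short_leg)
```

Since the heading errors are independent, the lateral drift behaves like a random walk whose standard deviation scales with the square root of the number of steps.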
Artificial Life (2014) 20 (1): 1–3.
Published: 01 January 2014
Artificial Life (2009) 15 (4): 465–484.
Published: 01 October 2009
Abstract
This research work illustrates an approach to the design of controllers for self-assembling robots in which the self-assembly is initiated and regulated by perceptual cues that are brought forth by the physical robots through their dynamical interactions. More specifically, we present a homogeneous control system that can achieve assembly between two modules (two fully autonomous robots) of a mobile self-reconfigurable system without a priori introduced behavioral or morphological heterogeneities. The controllers are dynamic neural networks evolved in simulation that directly control all the actuators of the two robots. The neurocontrollers cause the dynamic specialization of the robots by allocating roles between them based solely on their interaction. We show that the best evolved controller proves to be successful when tested on a real hardware platform, the swarm-bot. The performance achieved is similar to the one achieved by existing modular or behavior-based approaches, also due to the effect of an emergent recovery mechanism that was neither explicitly rewarded by the fitness function, nor observed during the evolutionary simulation. Our results suggest that direct access to the orientations or intentions of the other agents is not a necessary condition for robot coordination: Our robots coordinate without direct or explicit communication, contrary to what is assumed by most research works in collective robotics. This work also contributes to strengthening the evidence that evolutionary robotics is a design methodology that can tackle real-world tasks demanding fine sensory-motor coordination.
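The evolutionary design loop behind such controllers can be caricatured as follows. This is a generic hedged sketch, not the article's implementation: the stand-in fitness function below replaces the closed-loop robot simulation used in the actual work, and all parameters are invented.

```python
import random

def evaluate(weights):
    """Stand-in fitness: the real work evaluates a closed-loop robot
    simulation; here we just reward weights near a hypothetical target."""
    target = [0.5] * len(weights)
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def evolve(n_weights=8, pop_size=20, generations=50, sigma=0.1, seed=1):
    """Minimal evolutionary loop of the kind used to set the connection
    weights of a dynamic neural controller: truncation selection over a
    population of weight vectors, Gaussian mutation, and elitism."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_weights)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate, reverse=True)
        elite = pop[: pop_size // 4]              # truncation selection
        pop = [[w + rng.gauss(0, sigma) for w in rng.choice(elite)]
               for _ in range(pop_size)]          # Gaussian mutation
        pop[0] = list(elite[0])                   # elitism
    return max(pop, key=evaluate)

best = evolve()
print(evaluate(best))  # fitness improves toward the optimum of 0
```

In the article's setting each genotype would encode the weights of a dynamic neural network, and fitness would come from simulating the two robots attempting to assemble.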
Artificial Life (2008) 14 (2): 157–178.
Published: 01 April 2008
Abstract
This article describes a simulation model in which artificial evolution is used to design homogeneous control structures and adaptive communication protocols for a group of three autonomous simulated robots. The agents are required to cooperate in order to approach a light source while avoiding collisions. The robots are morphologically different: Two of them are equipped with infrared sensors, one with light sensors. Thus, the two morphologically identical robots should take care of obstacle avoidance; the other one should take care of phototaxis. Since all of the agents can emit and perceive sound, the group's coordination of actions is based on acoustic communication. The results of this study are a proof of concept: They show that dynamic artificial neural networks can be successfully synthesized by artificial evolution to design the neural mechanisms required to underpin the behavioral strategies and adaptive communication capabilities demanded by this task. Postevaluation analyses unveil operational aspects of the best evolved behavior. Our results suggest that the building blocks and the evolutionary machinery detailed in the article should be considered in future research work dealing with the design of homogeneous controllers for groups of heterogeneous cooperating and communicating robots.
Artificial Life (2002) 8 (2): 103–121.
Published: 01 April 2002
Abstract
In this article, we study the relationship between the two techniques known as ant colony optimization (ACO) and stochastic gradient descent. More precisely, we show that some empirical ACO algorithms approximate stochastic gradient descent in the space of pheromones, and we propose an implementation of stochastic gradient descent that belongs to the family of ACO algorithms. We then use this insight to explore the mutual contributions of the two techniques.
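The correspondence can be illustrated with a toy sketch (not the article's exact algorithm): treat pheromone values as the parameters of a softmax distribution over solution components, and update them with a score-function (REINFORCE-style) stochastic gradient step on expected solution quality; ascent on quality is descent on cost. The component qualities, learning rate, and baseline below are invented for illustration.

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def stochastic_gradient_aco(qualities, alpha=0.1, n_ants=10,
                            iters=500, seed=0):
    """Pheromone values parameterize a softmax choice distribution; each
    batch of sampled solutions yields a stochastic-gradient step on the
    expected quality, using the batch average as a baseline."""
    rng = random.Random(seed)
    k = len(qualities)
    theta = [0.0] * k                    # log-pheromone per component
    for _ in range(iters):
        p = softmax(theta)
        choices = rng.choices(range(k), weights=p, k=n_ants)
        rewards = [qualities[i] for i in choices]
        baseline = sum(rewards) / n_ants
        for i, q in zip(choices, rewards):
            adv = q - baseline
            for j in range(k):           # grad of log p[i]: indicator - p[j]
                grad = (1.0 if j == i else 0.0) - p[j]
                theta[j] += alpha * adv * grad / n_ants
    return softmax(theta)

# Components with qualities 1, 2, 5: reinforcement concentrates
# probability mass (pheromone) on the highest-quality component.
probs = stochastic_gradient_aco([1.0, 2.0, 5.0])
print(probs.index(max(probs)))  # index of the highest-quality component
```

The structural point mirrored here is that an ACO-style reinforcement of sampled components is, in expectation, a gradient step in pheromone space.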
Artificial Life (1999) 5 (2): 137–172.
Published: 01 April 1999
Abstract
This article presents an overview of recent work on ant algorithms, that is, algorithms for discrete optimization that took inspiration from the observation of ant colonies' foraging behavior, and introduces the ant colony optimization (ACO) metaheuristic. In the first part of the article the basic biological findings on real ants are reviewed and their artificial counterparts as well as the ACO metaheuristic are defined. In the second part of the article a number of applications of ACO algorithms to combinatorial optimization and routing in communications networks are described. We conclude with a discussion of related work and of some of the most important aspects of the ACO metaheuristic.
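The construct/evaporate/deposit cycle common to ant algorithms can be sketched on a tiny tour-construction problem. This is a minimal Ant System-style illustration, not the article's pseudocode; the four-city instance and all parameter values are invented, and refinements such as elitist deposits or pheromone bounds are omitted.

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def ant_system(dist, n_ants=10, iters=100, alpha=1.0, beta=2.0,
               rho=0.5, seed=0):
    """Minimal Ant System sketch: ants build tours with probabilities
    proportional to pheromone^alpha * (1/distance)^beta, then pheromone
    evaporates at rate rho and each ant deposits 1/length on its tour."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone matrix
    best, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):                  # solution construction
            tour, unvisited = [0], set(range(1, n))
            while unvisited:
                i = tour[-1]
                cand = list(unvisited)
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                     for j in cand]
                j = rng.choices(cand, weights=w)[0]
                tour.append(j)
                unvisited.remove(j)
            tours.append(tour)
            L = tour_length(tour, dist)
            if L < best_len:
                best, best_len = tour, L
        for i in range(n):                       # evaporation
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        for t in tours:                          # quality-weighted deposit
            L = tour_length(t, dist)
            for k in range(n):
                a, b = t[k], t[(k + 1) % n]
                tau[a][b] += 1.0 / L
                tau[b][a] += 1.0 / L
    return best, best_len

# Four cities on a unit square: the optimal tour is the perimeter, length 4.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
tour, length = ant_system(dist)
print(length)
```

On this small instance the perimeter tour of length 4 is found quickly; the same cycle, with problem-specific construction rules, underlies the ACO applications to combinatorial optimization and network routing surveyed in the article.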