Table 1 compares the average fitness of the best agents controlled/trained by each algorithm. In the table, "Perfect agent" (PA) refers to an agent controlled by hand-coded rules rather than by an ANN; this agent has "perfect knowledge" of the problem from the start of the trial, so no lifetime learning takes place.

Table 1:

Foraging task: average fitness results of the agents controlled/trained by different algorithms. The details of rules ID:1 and ID:18 can be found in Tables 2 and 3.

Algorithm                                   | Fitness | Std   | Learning Type
--------------------------------------------|---------|-------|---------------------
Perfect Agent (PA)                          | 67      | 8.28  | Hand-coded
Hill Climbing (HC)                          | 59      | 19.70 | Offline optimization
Evolved Synaptic Plasticity (rule ID:1)     | 50      | 9.91  | Lifetime learning
Discrete Hebbian/anti-Hebbian (rule ID:18)  | 0.2     | 6.13  | Lifetime learning
