Outsourcing Control requires Control Complexity

An embodied agent constantly influences its environment and is influenced by it. We use the sensorimotor loop to model these interactions, which allows us to quantify different information flows in the system with various information theoretic measures. This includes a measure for the interaction between the agent's body and its environment, called Morphological Computation. Additionally, we examine the controller complexity with two measures, one of which can be seen in the context of the Integrated Information Theory of consciousness. Applying this framework to an experimental setting with simulated agents allows us to analyze the interaction between an agent and its environment, as well as the complexity of its controller, the brain of the agent. Previous research reveals an antagonistic relationship between controller complexity and Morphological Computation: a morphology well adapted to a task can significantly reduce the necessary complexity of the controller. This creates the problem that embodied intelligence is correlated with a reduced need for a controller, a brain. However, in order to interact well with their surroundings, agents first have to understand the relevant dynamics of the environment. By analyzing learning agents we observe that an increased controller complexity can facilitate a better interaction between an agent's body and its environment. Hence, learning requires an increased controller complexity, and controller complexity and Morphological Computation influence each other.


Introduction
Every embodied agent, whether it is an animal, a human or a robot, exists in constant interaction with its environment. The morphology of an agent's body has a significant impact on the nature of this interaction. The authors of the book "How the body shapes the way we think: a new view of intelligence" [25] emphasize the importance of this interaction and its influence on the structure of the control architecture, the brain of an agent. In [26], the authors express this notion more precisely in the following way.
"Second, there is a kind of trade-off or balance: the better the exploitation of the dynamics, the simpler the control, the less neural processing will be required." This suggests that the amount of Morphological Computation has an impact on the controller complexity. Our previous research with simulated agents, published in [18], confirms this intuition. This leads to the problem that a well-designed morphology would make a complex control architecture superfluous. In this work we extend the framework to include the process of learning a new task. Thereby we are able to observe that a high controller complexity can facilitate a better interaction with the environment, that is, a higher Morphological Computation. Furthermore, agents with a simplified control architecture seem to be almost unable to learn a good strategy. Hence, we conclude that an agent needs an increased controller complexity in order to learn and that both concepts, controller complexity and Morphological Computation, influence each other.
In the next section we describe our main results and the intuition behind these in more detail.

Intuition and Main Results
Learning a new task and adapting to changes in the environment poses a difficult challenge. An important aspect of this is the ability to predict what happens next and especially what the outcomes of one's own actions would be. We theorize that even for seemingly easy problems, which can be solved without much involvement of the brain, first learning to perform a task requires complex computations in the control architecture. We illustrate this aspect with the following example.
Consider a child who tries to learn to ride a bike. Nearly every task the child has learned up to this point, e.g. walking, speaking or drawing, becomes harder when one tries to do it fast. So the child expects that moving slowly will lead to the best outcome. According to its understanding of the world, its world model, riding a bike slowly is easier than riding it fast. Unfortunately, in this case speed stabilizes a bike and is therefore beneficial for learning. The child is working with an inaccurate world model. So before the child can learn to ride a bike, it has to observe and understand that faster can mean easier. It has to update its world model in order to learn and to be able to use the world in an optimal way.
In this work we closely examine the information flows in a learning agent. We are especially interested in the controller complexity, quantified by two measures that refer to different parts of the controller. The first measure can be seen in the context of the Integrated Information Theory of consciousness, as introduced in the next section. It quantifies the information integration among the controller nodes across different points in time, whereas the second measure assesses the complexity of the agent's internal world model, which forms a prediction about the next sensory state. Both measures follow the notion that a system is complex if it is more than the sum of its parts. We theorize that the agent has to combine its available information sources, either by using Integrated Information or a complex internal world model, in order to be able to learn.
We use simple simulated agents and observe how the complexity of the controller develops during the learning process. At first we consider agents that do not have to learn a world model. These agents are able to detect and directly include the dynamics of the environment in their own world model by sampling the environment. We refer to these agents as PWM agents, which stands for perfect world model, and observe that they need next to no involvement of the controller. However, when we vary the accuracy of their world model, the Morphological Computation increases with the accuracy of the world model. At the same time, the influence of the controller on the behavior of the agents is high for an inaccurate world model and decreases as the quality of the world model improves.
Furthermore, we refer to the agents that have to form an internal world model as agents with a learned world model, LWM agents for short. These agents initially have a high information integration in the controller, and this value then decreases in case they are successful. We hypothesize that this is because the agents first have to learn the correct world model before they are able to optimally utilize the interaction of their bodies with the environment, which in turn leads to a lower controller complexity. This theory is supported by the result that agents that are less successful have a constantly high controller complexity and a lower Morphological Computation compared to the successful agents. Therefore our first main result is the following.
1. An agent that understands its environment, meaning it has an accurate world model, exhibits a higher Morphological Computation and a lower controller complexity compared to agents with an inaccurate world model. The better an agent understands its environment, the more it can exploit the interactions between body and environment and the less controller complexity is needed.
Additionally, we analyze agents with a simplified control architecture. More precisely, we examine agents that are not able to integrate information in the controller and call them "split" LWM agents, as opposed to the complete ones. Hence, the controller complexity of these agents is solely determined by the internal world model. They perform significantly worse compared to complete LWM agents. This leads to the observation that the few successful split LWM agents combine their different information sources directly in the world model in order to predict the next sensory state. This leads to our second result.
2. In order to learn a successful behavior, the agents have to combine information from different sources. This leads to an increased controller complexity either directly between the controller nodes, in the form of Integrated Information, or in the prediction process given by the internal world model.
In the next section we introduce the theoretical background and outline of this work.

Theoretical Background and Outline
In this work we analyze the dynamics of the brain complexity and Morphological Computation of simple, simulated agents. These agents are faced with the task of not touching the walls of their environment, which is described in more detail in Section 2. We model the agents in their environment by a sensorimotor loop. This reflects the interactions among the sensors S, the actuators A and the controller nodes C and can be translated to probability distributions that define the behavior of the agents. This is discussed in Section 3. In addition to their sensor, actuator and controller nodes, some agents also have an internal prediction S′. We call the mechanism that generates a prediction the "internal world model". Such an internal world model was also used in [6] and [7]. The agents that are equipped with an internal world model are called LWM agents. This approach allows us to analyze in detail the information flows among the different parts of the agents, and especially their prediction.
The applied learning algorithm is based on a modification of the em-algorithm, a well-known information geometric algorithm discussed, for example, in [10,3]. It alternates between optimizing the agent's behavior and updating its world model. The first part, optimizing the behavior, is done by maximizing the likelihood of a goal variable. This follows the reasoning of the approach described by Attias in [5], further analyzed in [30,29], called "Planning as Inference". We also use this optimization in [18], where it is described in Section 3.1 and in more detail in the Supplementary Material.
While the agents learn, we calculate various information theoretic measures, defined in Section 4. One of these measures comprises part of the controller complexity and assesses how much information is integrated in the controller. This can be seen in the context of Integrated Information Theory (IIT), originally proposed by Tononi. The core idea of IIT is that the level of consciousness of a system can be equated with the amount of information integration among different parts of it. This theory developed rapidly from a measure for brain complexity [28] towards a thorough theory of consciousness [27,24,9]. Hence there exist various types of Integrated Information measures, depending on the version of the theory they refer to and the setting they are defined in. Here, we use the information geometric measure that we proposed in [17] as a measure for the controller complexity. Thereby, we follow the suggestion of the authors of [19] to adopt a more pragmatic point of view on Integrated Information measures.
In [18] we compare the Integrated Information of an agent with its Morphological Computation. The term "Morphological Computation" describes the reduction of computational cost for the controller that results from the interaction of the agent's body with its environment. One example where Morphological Computation is applied is the field of soft robotics. There the softness of the robots' bodies leads to a lower computational cost when they, for example, grab fragile objects [15,22,23]. The concept of Morphological Computation is discussed in more detail in [21] and [12]. In the field of embodied artificial intelligence, the cheap design principle, formulated in [25], states that a robot's body should be constructed in a way that best exploits the properties of the environment. This should lead to a simpler control architecture.
We confirm this intuition in [18], where the comparison between the controller complexity and Morphological Computation leads to the result that they have an antagonistic relationship. On the one hand this is intuitive, since the more an agent relies on the interaction of its body with the environment to solve a task, the less involvement of the controller is needed. On the other hand this leads to the problem that embodied intelligence is now correlated with reduced involvement of the brain. If the morphology of an agent's body is intelligent enough, would it need a control architecture at all?
Here we want to present one possible solution, given by the challenge of learning. As discussed in the introduction, learning to perform a task entails updating an internal world model in order to predict the outcome of one's actions. Hence, we measure the controller complexity here not only via the Integrated Information but also by the complexity of the internal world model. We hypothesize that a learning process requires the agent to highly integrate the available information, hence that learning requires an increased controller complexity. In [11], the authors conclude that Integrated Information increases with the fitness of evolving agents. The authors of [2] increase the complexity of the environment, which leads to higher Integrated Information, and in [1] high Integrated Information benefits rich dynamical behavior. All these results are clear indications that a high information integration in the controller is beneficial for an embodied agent that is faced with a task.
Using the simulations that we introduce in Section 2, we first consider, in Section 5.1, the results of a set of agents that do not need to form an internal world model. These agents sample their environment, meaning that they instantaneously and accurately include their experiences in their world model. We will refer to these agents as perfect world model (PWM) agents. In this utopian situation, agents do not require a complex controller in order to learn; as long as their world model is accurate, they behave mainly through reactive mechanisms. In contrast, the LWM agents, the ones that have to build their own internal world model, require an increased controller complexity in order to learn. Once their world model is learned, the Integrated Information value decreases, since the agent can then make use of the interactions with its environment, measured by Morphological Computation. These results are discussed in Section 5.2.
In Section 5.3 we additionally take a close look at the dynamics of the prediction process. We calculate an information theoretic measure on the internal world model that quantifies the synergistic influences of the different information sources on the prediction. Additionally, we compare the complete agents with ones that have a simplified controller. These agents are not able to integrate information in their controller and are therefore called "split". We see that the latter are barely, if at all, able to learn to perform the desired task. The ones that do learn the task have a complex prediction mechanism that combines information there, in order to compensate for the missing information integration in the controller.

Setting of the Experiment
In our experiment, we analyze the information flows of simplistic, 2-dimensional, acting agents. An agent consists of a round body with a small tail and two binary sensors. These sensors are visualized in Figure 1 on the left as lines that are green when they detect a wall and black otherwise. Five of these agents are depicted in a racetrack. This racetrack is their environment, in which they have to move. The agents have four different movements: fast forward (approx. 0.6 unit lengths per step), slow forward (approx. 0.2 unit lengths per step), left and right (turning by approx. 14° at a speed of 0.4 unit lengths per step). Whenever the body of an agent touches a wall, the agent gets stuck. This means that it can only turn on the spot, but will not move away until both sensors no longer detect a wall. The implementation and a video of this movement can be found at [16].
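To make the movement rules concrete, the following Python sketch implements one movement step with the step sizes given above. The state representation (position plus heading) and the exact handling of a stuck agent are our assumptions for illustration; the actual implementation is available at [16].

```python
import math

# Step sizes follow the text: fast 0.6, slow 0.2, turns of approx. 14
# degrees at speed 0.4 (units per step). Everything else is hypothetical.
MOVES = {
    "fast":  (0.6, 0.0),
    "slow":  (0.2, 0.0),
    "left":  (0.4, math.radians(14)),
    "right": (0.4, -math.radians(14)),
}

def step(x, y, heading, move, stuck):
    """One movement step; a stuck agent may still turn on the spot."""
    speed, dtheta = MOVES[move]
    heading += dtheta
    if stuck:
        speed = 0.0  # stuck agents do not translate, only rotate
    return x + speed * math.cos(heading), y + speed * math.sin(heading), heading
```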
Varying the length of the sensors directly influences the amount of information an agent receives about the world and hence the quality of the interaction of the agent with its environment. Therefore this has an impact on the potential Morphological Computation. In [21] this is called "Morphology facilitating perception" and it is discussed in more detail in [18].

The Agents and the World Model
The agents are modeled by a discrete, multivariate, time-homogeneous Markov process (X_t)_{t∈N} = (S_t, A_t, C_t) with state space X = S × A × C. Here the variable S_t describes the two binary sensors that detect a wall and a binary variable that encodes whether the body of the agent is touching a wall or not. The node A_t includes the two binary actuators and C_t the two binary controller nodes. Additionally, we introduce another variable S′_t that describes the internal prediction of the next sensor state and hence consists of three binary variables. This is not necessary in the case of the PWM agents, as discussed in the context of Figure 3.
The elements of the agents are connected according to the graph in Figure 2, which determines the factorization of the joint distribution of the process. In each figure we depict only one node for each of S, S′, A and C in order to increase clarity.
Note that, since S′_t is a prediction of S_t, it is made of the same substrate; hence the state space of S′_t is S. The difference between S_t and S′_t lies in the mechanism by which they are generated. The node S_t is influenced by the information from S_{t-1} and A_{t-1}. These are indirect influences, because in this case the information flows through the environment. The role of the environment is discussed in more detail in [18].
Figure 2: The sensorimotor loop of the learning agents.
The conditional distribution P(S_{t+1} | S_t, A_t) is called the world model in [31] and [20]. The internal prediction S′_t is generated by P(S′_{t+1} | A_t, C_t), which was also named "world model" in [6], [7] and [8]. To prevent confusion, we refer to P(S′_{t+1} | A_t, C_t) as the "internal world model" and to P(S_{t+1} | S_t, A_t) as the "empirical world model". We chose the term empirical world model, since the agents gain this distribution by sampling their experience.
We sample the distributions P(S_t, A_t, C_t) and P(S_{t+1} | S_t, A_t) as described in more detail in [31] and denote the sampled distributions by P̂(S_{t+1} | S_t, A_t) and P̂(S_t, A_t, C_t).
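A minimal Python sketch of how such an empirical world model can be sampled from a trajectory by counting transitions. The pseudo-count `alpha`, which keeps the sampled distribution strictly positive, is our assumption for illustration, not a detail taken from [31].

```python
import numpy as np

def empirical_world_model(trajectory, n_s, n_a, alpha=1e-6):
    """Estimate P(S_{t+1} | S_t, A_t) by counting observed transitions.

    `trajectory` is a list of (s, a) index pairs; `alpha` is a small
    pseudo-count (hypothetical) avoiding zero probabilities."""
    counts = np.full((n_s, n_a, n_s), alpha)
    for (s, a), (s_next, _) in zip(trajectory[:-1], trajectory[1:]):
        counts[s, a, s_next] += 1.0
    # Normalize over the next-state axis to obtain conditional distributions.
    return counts / counts.sum(axis=2, keepdims=True)
```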
Additionally, we analyze the behavior of four different variations of agents. The first type comprises the agents that have no internal world model, as depicted on the top left in Figure 3. These agents have direct access to their empirical world model and are called PWM agents. The next type of agent consists of the LWM ones, depicted on the bottom left of Figure 3. These agents have an internal world model. Lastly, we differentiate between complete agents and ones with simplified control architectures, with which they are not able to integrate information in their controllers. This is done by ensuring that the controller node C^i_{t+1} only receives information from C^i_t and not from C^j_t, for i, j ∈ {1, 2}, i ≠ j, as depicted in the second column of Figure 3. We refer to them as split PWM and split LWM agents, respectively.
Figure 3: The connections of the complete and split PWM agents on the top row and the complete and split LWM agents in the bottom row.

Learning
In the setting described in this article, the agent learns while it is inside the racetrack. Hence, the realized states s_{t-1}, a_{t-1}, c_{t-1} are known at each step t and the agent can make use of these certainties. To that end we need the following definitions.
Let P_{c_t}(C_{t+1} | S_{t+1}) be the probability distribution of C_{t+1} conditioned on S_{t+1} and a fixed state c_t. From an internal, agent-centric perspective, the predictive process is as depicted in Figure 4, with the PWM agents on the top and the LWM agents on the bottom. The PWM agents are able to use the sampled world model for the prediction, here denoted by Ŝ, whereas the LWM agents make use of their internal world model to arrive at the internal prediction S′. Note that s_t is the actual realized last sensor state and not an internal prediction. Both types of agents can optimize the distributions governing their next controller and actuator states, conditioned on the respective prediction. In addition, the LWM agents also learn their internal world model, given by P(S′_{t+2} | A_{t+1}, C_{t+1}). In our previous publication [18], we used the concept of "Planning as Inference" in order to optimize the behavior of the agent, and we apply the same algorithm in the case of the PWM agents. In this method the conditional distributions are optimized with respect to a goal variable by using the em-algorithm. This is a well-known information geometric algorithm that is guaranteed to converge, but might converge to a local minimum [3,4].
In the case of the LWM agents we have two goals. We want to optimize the distributions P_{c_t}(C_{t+1} | S′_{t+1}) and P(A_{t+1} | S′_{t+1}, C_{t+1}) such that the probability of touching the wall after the next movement is as low as possible, while keeping the internal world model P(S′_{t+1} | C_t, A_t) close to the empirical world model P(S_{t+1} | S_t, A_t). The second goal is important, because otherwise the optimization of the behavior would use faulty assumptions, leading to a failure of the agent. In the example in the introduction, this would be the child trying to learn to ride a bike while going as slowly as possible. Hence, both world models should result in similar predictions. These are highlighted in Figure 5. Note that the controller has only two binary variables, whereas the sensors consist of three binary variables. Therefore merely copying the information from the sensors is not a viable strategy for the agents. This is also a natural setting, because we are not able to consciously perceive every detail of our environment that our sensors pick up, but we learn to distinguish between important and irrelevant information.
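The "keep the internal world model close to the empirical one" objective amounts to a weighted Kullback-Leibler divergence between two conditional distributions. A small Python sketch of that quantity; aligning both models on a shared conditioning index k is our simplifying assumption, not a detail taken from the paper.

```python
import numpy as np

def conditional_kl(p, q, w):
    """sum_k w[k] * KL( p(.|k) || q(.|k) ) in bits, for row-stochastic
    tables p[k, x] and q[k, x] over a shared conditioning index k."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    mask = p > 0  # 0 * log 0 is treated as 0
    ratio = np.where(mask, p, 1.0) / np.maximum(q, 1e-300)
    kl_rows = np.where(mask, p * np.log2(ratio), 0.0).sum(axis=1)
    return float(np.dot(w, kl_rows))
```

The divergence is zero exactly when the two models agree on every conditioning state that carries weight, which is the situation in which the behavior optimization no longer works with faulty assumptions.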
Furthermore, we add Gaussian noise to the distribution P(A_{t+1} | S′_{t+1}, C_{t+1}), because the em-algorithm cannot recover a positive value once, for some action a_{t+1}, the equality P(a_{t+1} | s′_{t+1}, c_{t+1}) = 0 holds.
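A sketch of this noise step in Python. The noise scale and the clipping-plus-renormalization used to keep the table a valid distribution are our assumptions; the paper only states that Gaussian noise is added.

```python
import numpy as np

def jitter_policy(p, sigma=1e-3, rng=None):
    """Add small Gaussian noise to a conditional table p[a, s, c] and
    renormalize over the action axis, so no P(a | s, c) stays pinned at 0."""
    rng = np.random.default_rng() if rng is None else rng
    q = p + rng.normal(0.0, sigma, size=p.shape)
    q = np.clip(q, 1e-12, None)              # keep entries strictly positive
    return q / q.sum(axis=0, keepdims=True)  # renormalize each P(. | s, c)
```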

Measures of the Information Flow
We measure the importance of an information flow by calculating the difference between the actual distribution and the closest distribution without the information flow in question. The set of distributions without this information flow is called a split system. More precisely, the measures for the different information flows in the system are defined in the following way.
Definition 1. Let M be a set of positive probability distributions on a state space Z. Then we define the measure Ψ_M by minimizing the Kullback-Leibler divergence between the split system M and the full distribution P, i.e. Ψ_M(P) = min_{Q ∈ M} D_KL(P ‖ Q). Most of the discussed measures have a closed form solution and can be written as sums of (conditional) mutual information terms. The mutual information and the conditional mutual information are defined as I(Z_1; Z_2) = Σ_{z_1, z_2} P(z_1, z_2) log( P(z_1, z_2) / (P(z_1) P(z_2)) ) and I(Z_1; Z_2 | Z_3) = Σ_{z_1, z_2, z_3} P(z_1, z_2, z_3) log( P(z_1, z_2 | z_3) / (P(z_1 | z_3) P(z_2 | z_3)) ).
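For reference, both terms can be computed directly from joint probability tables. The following Python sketch (our illustration, not part of the original simulations) evaluates them in bits:

```python
import numpy as np

def mutual_information(p_xy):
    """I(X; Y) from a joint probability table p_xy[x, y], in bits."""
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0  # 0 * log 0 contributes nothing
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask])))

def conditional_mutual_information(p_xyz):
    """I(X; Y | Z) from a joint table p_xyz[x, y, z], in bits."""
    p_z = p_xyz.sum(axis=(0, 1))
    total = 0.0
    for z in range(p_xyz.shape[2]):
        if p_z[z] > 0:
            # Weighted mutual information of the conditional slice p(x, y | z).
            total += p_z[z] * mutual_information(p_xyz[:, :, z] / p_z[z])
    return total
```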
These formulas can be interpreted in the following way. If I(Z_1; Z_2 | Z_3) = 0, then Z_1 and Z_2 are independent given Z_3; the larger the value, the stronger the dependence. Hence, this quantifies the connection between Z_1 and Z_2, given the influence of Z_3. In the following sections we emphasize the connection that we are measuring by a dashed arrow.

Controller Complexity
We assess the controller complexity using two different concepts that refer to different parts of the controller. First we discuss the measures corresponding to the Integrated Information, before we quantify the complexity of the internal world model in Section 4.1.2.

Integrated Information
There exist various types of Integrated Information measures, as discussed in the introduction.
The measure for Integrated Information we are using here was defined in our previous publication [17] and also applied in [18]. It measures how much information gets integrated among different nodes across different points in time, as depicted in Figure 6. The minimization described in the previous section results in a closed form solution, as shown in [17].
In our case we only have two binary controller nodes, hence J = {1, 2}.
Note that the split LWM and PWM agents do not have these connections, hence Φ_IIT = 0 in these cases.
The importance of the Integrated Information for the behavior of the agent additionally depends on the information flowing to and from the controller, as observed in [18]. These information flows are quantified by the two following measures, namely sensory information and control. We quantify the importance of the information flow from the sensory nodes to the controller nodes by a measure called "sensory information". The graphical representation of the corresponding split system is depicted in Figure 7, and this measure also has a closed form solution. If its value is zero, then the controller nodes do not depend on the sensory input and therefore cannot make any behaviorally beneficial contributions.
Additionally, the strength of the connection from the controller nodes to the actuator nodes is measured by a quantity called "control", Ψ_C. An agent whose controller has no influence on the actuators at all has Ψ_C = 0.
We combine these three measures to quantify the impact of the Integrated Information on the behavior of the agent. This additional measure is defined as the product of the Integrated Information, the sensory information and the control, and is called "effective information integration".

Internal world model
We analyze the prediction, defined by the internal world model P(S′_{t+1} | A_t, C_t), by calculating how important the interplay between the influences of A_t and C_t on S′_{t+1} is.
This measure has no closed form solution. Here we define a split system Q, in which the two-way interactions are fixed to the two-way interactions of the original model P, but there is no combined influence from (A_t, C_t) on S′_{t+1}.
Hence we will call it synergistic prediction, Ψ_SynP. This is conceptually similar to the synergistic measure for Morphological Computation proposed in [13], and we also use the iterative scaling algorithm to calculate this measure, as described in Section 2.5 of [13].
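The iterative scaling step can be illustrated with a generic iterative proportional fitting routine over a three-variable table: starting from the uniform distribution, one repeatedly rescales so that all three pairwise marginals match those of the original model. This is a sketch of the general technique only; the exact split system used for Ψ_SynP follows [13] and may differ in detail.

```python
import numpy as np

def ipf_two_way(p, n_iter=500):
    """Fit the maximum-entropy distribution q over (s, a, c) whose three
    pairwise marginals match those of p, via iterative proportional fitting."""
    q = np.full_like(p, 1.0 / p.size)
    for _ in range(n_iter):
        for axis in (2, 1, 0):  # match the (s,a), (s,c) and (a,c) marginals
            m_p = p.sum(axis=axis, keepdims=True)
            m_q = q.sum(axis=axis, keepdims=True)
            q = q * np.where(m_q > 0, m_p / np.maximum(m_q, 1e-300), 0.0)
    return q
```

The KL divergence between p and the fitted q then quantifies how much of p is due to the three-way (synergistic) interaction that the split system cannot express.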

Morphological Computation
The concept of Morphological Computation describes the reduction of the necessary computation in the controller that results from the interaction of the agent's body with its environment. There exist various types of Morphological Computation [21], and different measures for it. We use the formulation Ψ_MC = I(S_{t+1}; S_t | A_t).
This measures the information flow through the world going from one sensory state to the next one, given the actuator state, as depicted in Figure 9. In [14] this was introduced as a measure for Morphological Computation, and in a comparison with other measures in [12], Ψ_MC shows desirable properties.
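Ψ_MC can be evaluated directly from a joint table over (S_{t+1}, S_t, A_t). A minimal Python sketch; in the paper this quantity is estimated from the sampled empirical world model, whereas here we assume the full joint table is given.

```python
import numpy as np

def psi_mc(p_joint):
    """Psi_MC = I(S_{t+1}; S_t | A_t) in bits, from p_joint[s_next, s, a]."""
    p_a = p_joint.sum(axis=(0, 1))   # P(A_t)
    p_sa = p_joint.sum(axis=0)       # P(S_t, A_t)
    p_na = p_joint.sum(axis=1)       # P(S_{t+1}, A_t)
    total = 0.0
    it = np.nditer(p_joint, flags=["multi_index"])
    for v in it:
        s_next, s, a = it.multi_index
        if v > 0:
            # Standard conditional mutual information summand.
            total += float(v) * np.log2(
                float(v) * p_a[a] / (p_sa[s, a] * p_na[s_next, a])
            )
    return total
```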

Results
In this section we discuss the results of our simulations. We used 1000 random input distributions for each sensor length and each type of agent. All agents train for 20 000 steps and the measures are calculated at 90 different points during these steps. More precisely, we apply the measures at the 9 time points 50, 100, 200, 500, 1 000, 2 000, 5 000, 10 000 and 20 000, at 9 equidistant time points between each consecutive pair of them, and at 9 equidistant time points between zero and 50.
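The 90 evaluation times can be generated as follows; rounding the equidistant intermediate points to integer steps is our assumption.

```python
import numpy as np

def measurement_points():
    """The 90 evaluation times: 9 anchors, 9 equidistant points inside each
    of the 8 gaps between consecutive anchors, and 9 points inside (0, 50)."""
    anchors = [50, 100, 200, 500, 1000, 2000, 5000, 10000, 20000]
    points = list(np.linspace(0, 50, 11)[1:-1])  # 9 points strictly inside (0, 50)
    for lo, hi in zip(anchors[:-1], anchors[1:]):
        points.append(lo)
        points.extend(np.linspace(lo, hi, 11)[1:-1])
    points.append(anchors[-1])
    return [int(round(t)) for t in points]
```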
Additionally, we calculate the success rate of an agent by sampling at how many time points the agent is stuck at a wall during the 20 000 training steps. Hence a success rate of 0.1 signifies that an agent was stuck 90% of the time. We then divide the agents into successful and unsuccessful ones based on their success rates. Taking the best third of the complete LWM agents leads to agents performing above 16.8% being called successful, while we refer to agents with a success rate below 16.8% as unsuccessful. Dividing the agents at this threshold allows us to call only those agents successful whose success rate increased significantly during learning.
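The success rate and the best-third split can be sketched as follows in Python; the 16.8% threshold quoted in the text is the value this quantile takes for the complete LWM agents, while the quantile computation itself is our illustration.

```python
import numpy as np

def success_rate(stuck_flags):
    """Fraction of training steps at which the agent was NOT stuck at a wall."""
    stuck_flags = np.asarray(stuck_flags, dtype=bool)
    return 1.0 - stuck_flags.mean()

def split_by_success(rates, top_fraction=1 / 3):
    """Label the best `top_fraction` of agents as successful; returns the
    boolean labels and the induced threshold."""
    rates = np.asarray(rates, dtype=float)
    threshold = np.quantile(rates, 1.0 - top_fraction)
    return rates > threshold, threshold
```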
In Figure 10 we can see the results of the measures for the controller complexity, namely Integrated Information and synergistic prediction, as well as the Morphological Computation, averaged over all successful LWM agents after 20 000 steps. We observe that these quantities behave antagonistically. Therefore the results confirm our previous observation, published in [18]. Note that when the sensors are so long that the agents almost always detect a wall, this additional information is no longer beneficial for the agents: the Morphological Computation no longer increases, while the Integrated Information and synergistic prediction increase again.
This relationship leads to the question why agents with a well-adapted morphology would need a complex control architecture. Wouldn't it be possible to build agents that are so well adapted to their environment that a simple controller suffices? There might be several reasons why a complex controller is necessary in general, despite this antagonistic relationship, as we discuss further in Section 6.
Here we argue that an involvement of the controller is necessary, since agents first have to learn how to interact with their environment, meaning they have to build their own world models.

The Agents with a Perfect World Model (PWM)
Now we discuss the results for the PWM agents, which do not have to learn their world models. These are the agents with direct access to their empirical world models. The best approx. 33% of the PWM agents are the ones with a success rate above 61.5%, which we term the successful PWM agents. Hence, the PWM agents perform much better overall than the LWM agents, for which the best third only performs better than 16.8%. We depict the Integrated Information, sensory information, control and effective information integration for the successful PWM agents in Figure 11.
The controller complexity, given here solely by the Integrated Information value due to the lack of an internal world model, seems not to change after the first few initial steps. In [18] we discuss that the importance of the controller complexity additionally depends on the sensory information and the control.
While the sensory information increases with the sensor length, we can see the reason for the behavior of the Integrated Information in the results for control, Ψ_C. After the first steps this measure is very close to 0, with an average value of 0.0021 at the 20 000th step. If Ψ_C = 0, then the controller has no influence on the behavior of the agent at all. It is easy to check that in this case the information flow in the controller is no longer changed by the em-algorithm, since the controller has no influence on whether the agent is successful or not. This only holds for the PWM agents, because we apply the original em-algorithm here, not the modified one.
Figure 11: The Integrated Information, sensory information, control and effective information integration for the successful PWM agents.
The effective information integration, on the top right of Figure 11, summarizes the behavior of the other three measures. It has a value close to zero, which shows that the brain complexity is nearly irrelevant for the behavior of the agent in this case.
Hence, for the PWM agents a complex controller is not needed in order to learn to perform a task. In fact, PWM agents without the ability to integrate information perform only slightly worse than complete ones; more precisely, the split PWM agents have an average success rate of 33.69% compared to 33.83% in the complete case.
In this scenario, the success does not depend on the complexity of the controller, but on the interaction of the agent with its environment. We therefore now directly compare the Morphological Computation and controller complexity of successful and unsuccessful PWM agents, depicted in Figure 12. The successful agents have a much higher Morphological Computation overall. The Morphological Computation measures how much the next sensor state depends on the last sensor state, given the actuator nodes, and is calculated using the empirical world model. This means that the successful agents found strategies to move in their environment and to use the interaction with the environment in such a way that the next point in time is more predictable, compared to the unsuccessful agents.
Additionally, the Integrated Information is overall higher for the unsuccessful agents. These agents have a lower Morphological Computation, and we again observe an antagonistic relationship between the two quantities. Previously we noted that the Integrated Information is not influenced by the em-algorithm after the first steps; the observation made here refers to the value that the algorithm reaches exactly during these first steps. To conclude, a PWM agent with access to its correct world model and with a morphology that is well adapted to its environment has no need for a control architecture, a brain.
In order to further examine this connection between the quality of the world model and the need for a complex controller, we additionally analyze agents that are only able to sample their empirical world model for a part of the total 20 000 steps. These agents sample the empirical world model and learn their behavior only up to a certain point. After that point the world model stays fixed, and the agents have to use this possibly inaccurate world model to find the best behavior for the remainder of the 20 000 steps. We distinguish between 9 different cases, namely agents that sample the world model for 50, 100, 200, 500, 1 000, 2 000, 5 000, 10 000 or the full 20 000 steps.
In Figure 13 we highlight the relationship between Morphological Computation and effective information integration with respect to the accuracy of the world model, shown on the x-axis. We display the arithmetic mean over the different sensor lengths after 20 000 steps.
While the Morphological Computation increases with the accuracy of the world model, the effective information integration decreases.
In the introduction we motivated the intuition behind these concepts using the example of a child learning to ride a bike. The better the child understands the dynamics of its environment, the more it can make use of them, for example by riding faster to stabilize the bike. Hence, a better world model leads to a higher Morphological Computation, which in turn reduces the necessity for a complex controller. This concludes the analysis of the accuracy of the empirical world model in relation to the information flows inside the agents. In the next section we discuss the LWM agents, which have an internal world model.

The LWM Agents
Here we discuss the results for the LWM agents, which have to learn the dynamics of the world by forming an internal world model. We first focus on the measures for the controller complexity, namely Integrated Information and synergistic prediction. The two top rows of Figure 14 depict, on the left, the success rate of the unsuccessful complete LWM agents, that is, those with a success rate < 16.8%. The middle of this figure shows the Integrated Information and the right the synergistic prediction. The second row consists of the 2-dimensional perspectives, meaning the success rate, Integrated Information and synergistic prediction with respect to the number of steps. The unsuccessful agents have an Integrated Information value between 0.3 and 0.38 and a synergistic prediction between 0.1 and 0.15. Now we compare these results to the values of the successful agents. The bottom rows of Figure 14 show the success rate of the successful LWM agents on the left, the Integrated Information in the middle and the synergistic prediction on the right, with the 2-dimensional perspectives in the last row. There we can observe that the Integrated Information and synergistic prediction decrease with the sensor length, as discussed in the context of Figure 10. Additionally, the Integrated Information and synergistic prediction values first increase during the first 400 steps and then decrease strongly. After 20 000 steps the Integrated Information value lies roughly between 0.05 and 0.15 and the synergistic prediction between 0.04 and 0.08. Hence, for the successful agents the complexity of the controller reduces over time to a significantly lower value than for the unsuccessful agents. Following the observations of the previous section, Section 5.1, this leads to the conclusion that a high controller complexity might be important as long as the agents have not been able to learn the correct world model. Without a correct world model the agents are not able to find a strategy that would allow them to optimally use their interaction with the environment.
In order to interpret these results in connection with the learning behavior of the agents, we now discuss the values for the sensory information, control and Morphological Computation. The first two give insight into the effect the Integrated Information has on the actions of the agent and combined lead to the effective information integration. In Figure 15 we depict these three measures in the first three columns, with the successful agents in the top and the unsuccessful ones in the bottom row. The sensory information and control decrease with the number of steps taken, for the successful as well as the unsuccessful agents. However, there is a clear difference in the overall values of these measures, which leads to an effective information integration of around 0.002 for the successful agents, while this value reaches on average 0.03 for the unsuccessful agents. Hence, the Integrated Information is not only higher for the unsuccessful agents, it also has more impact on their behavior. The last column of Figure 15 depicts the Morphological Computation for successful and unsuccessful agents. There we observe that the successful agents reach a higher Morphological Computation value, on average 1.64, compared to a value of 1.5 for the unsuccessful agents.
These results support our hypothesis. A high controller complexity seems to be important as long as the agents have not been able to learn to interact with their environment. Hence, the Morphological Computation is lower for the unsuccessful agents, while the complexity and involvement of the controller are higher. Now the question remains whether a high controller complexity is really necessary for learning or just a byproduct of the Morphological Computation being low. In order to clarify this point, we now look at the split LWM agents, which have a simplified control architecture.

Comparing the split and complete LWM Agents
The architecture of the split LWM agents is depicted on the bottom right in Figure 3. These agents are not able to integrate information between their controller nodes; hence the complexity of the controller solely depends on the structure of the internal world model.
We divide these agents into successful and unsuccessful ones by applying the success criterion from the LWM agents, a success rate of 16.8%. The split agents perform worse, so this criterion does not lead to a 1/3, 2/3 split. However, it allows us to directly compare complete LWM and split agents with a similar success rate.
First, we consider the average success rates of the split and complete LWM agents. In addition, we compare them with the average success rate of agents that perform only random movements and do not learn at all. The results are given in Table 1. The split LWM agents perform on average barely better than the agents that move randomly. Note that there is also a considerable difference in the number of successful agents: only approximately 2.1% of the split LWM agents are successful, compared to 33.3% of the complete ones.
In summary, the split agents perform only marginally better than agents that move purely at random, and only very few split agents are successful. This strongly supports the hypothesis that an increased controller complexity is necessary for learning.
Additionally, we now focus on the internal world model of the few successful split agents. Here, we compare the synergistic prediction of the successful complete LWM agents and the successful split LWM agents. The results are shown in Figure 16. The synergistic prediction quantifies how important the interaction of both influences, from the actuators as well as from the controller nodes, is for the prediction. It is noticeable that the synergistic prediction is much higher for the successful agents that are not able to integrate information. This leads to the conclusion that for these split agents the internal world model, and therefore the prediction process, has to combine the information from different sources and becomes much more complex. The complete LWM agents are able to integrate the information directly between their controller nodes and do not need such a complicated world model to have a complex controller.
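The synergistic prediction is a specific measure defined earlier in the paper. As a rough illustration of the underlying idea only, the following sketch uses the interaction information I(S'; A, C) - I(S'; A) - I(S'; C) as a hypothetical proxy for synergy: it is positive when actuator and controller states together predict the next sensor state better than either does alone. The function names and the estimator are assumptions for illustration, not the paper's definition:

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Plug-in estimate of I(X; Y) from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def synergy_proxy(samples):
    """Interaction-information proxy for synergistic prediction.

    samples: list of (a, c, s_next) triples.  A positive value
    indicates that actuators and controller jointly carry predictive
    information about the next sensor state that neither carries alone.
    """
    joint = mutual_information([((a, c), s2) for a, c, s2 in samples])
    from_a = mutual_information([(a, s2) for a, _, s2 in samples])
    from_c = mutual_information([(c, s2) for _, c, s2 in samples])
    return joint - from_a - from_c

# XOR world: the next sensor state is a XOR c, so the prediction is
# purely synergistic -- neither input alone is informative.
xor_samples = [(a, c, a ^ c) for a in (0, 1) for c in (0, 1) for _ in range(25)]
print(round(synergy_proxy(xor_samples), 3))  # 1.0
```

In this reading, the high synergistic prediction of the successful split agents means their world model relies on exactly this kind of joint, XOR-like combination of actuator and controller information.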

Discussion
In this article we discuss the dynamics of Morphological Computation and controller complexity in learning, embodied artificial agents. As a training algorithm we use an adapted em-algorithm that alternates between optimizing the behavior to reach a goal and updating the internal world model. The agents move inside a racetrack and learn not to touch the walls. Using this simplistic example, we are able to analyze the different information flows inside the agents and especially to examine the process of predicting the next sensory state.
The results of our experiment regarding the controller complexity and Morphological Computation support our previous publication [18] and show the antagonistic relationship between them. These previous results suggest that agents with a highly adapted morphology might have no use for a complex control architecture. There are several possible ways to resolve this problem. It might be that our tasks are simply too easy, so that an agent truly only needs Morphological Computation in order to be successful. Another possibility is given by the authors of [26]: "The more the specific environmental conditions are exploited - and the passive dynamic walker is an extreme case - the more the agent's success will be contingent upon them. Thus, if we really want to achieve brain-like intelligence, the brain (or the controller) must have the ability to quickly switch to different kinds of exploitation schemes either neurally, or mechanically through morphological change." Hence, the agents might have no need for a controller because they are only faced with one single task, namely avoiding the walls of their environment. Furthermore, the nature of the task should perhaps require a higher-order understanding of the surroundings, so that the agents truly need to process the information from the environment. Therefore we will develop this approach further in order to explore these possibilities and apply it to more involved settings.
Despite the simplicity of our example, we were able to offer an additional solution to the posed problem. We theorize that learning to predict the environment results in a necessity for a complex controller. PWM agents, which do not have to learn to predict their environment, do not require a complex controller at all, not even to learn our task. However, when their ability to form an accurate world model is restricted, the involvement of the control architecture increases.
The LWM agents, on the other hand, show a necessity for an increased controller complexity in general.
The controller complexity of the successful agents is high at first, while the agents learn their world model, and then it decreases. We argue that this decrease could result from a rise in Morphological Computation that is facilitated by the correct world model. This is supported by the results for Morphological Computation, which is higher in the case of the successful agents. Hence, the two quantities, controller complexity and Morphological Computation, influence each other.
Comparing the complete LWM agents with the split ones, which have a simplified controller and are not able to integrate information, leads to the observation that the latter are not able to predict the next sensory state as well. The split LWM agents perform on average only marginally better than agents moving completely at random, and only a very small percentage of the split LWM agents are successful. Hence, learning requires an increased controller complexity.
Furthermore, the few successful split LWM agents have a more complex prediction process. This process itself combines the information from the controller and the actuator nodes in order to form a prediction of the next sensory state. This again supports the claim that an agent needs to integrate its available information in order to learn. In this case the complex process is not directly between the controller nodes, but inside the internal world model.
In [18] we iteratively project between these two sets in order to find the distribution in M_{PA} that is closest to M_{PG}. This would be the distribution that describes a valid agent and has a high likelihood of achieving the goal. This approach is also called planning as inference [5, 30, 29]. It is guaranteed to converge, but might converge to a local minimum.
In our case we want to adapt this approach in order to simultaneously learn the internal world model. The distribution P(S'_{t+1} | A_t, C_t) predicts the next sensory input and therefore reflects the agent's understanding of its environment. Hence we want to optimize our world model such that P(S'_{t+2} | S_{t+1}, A_{t+1}) = P̂(S_{t+2} | S_{t+1}, A_{t+1}), where P̂ is the sampled, empirical world model. Note that we require the goal to be a requirement on a joint distribution, not a conditional; hence the actual optimization works with P(S_{t+1}, A_{t+1}) P(S'_{t+2} | S_{t+1}, A_{t+1}) = P(S_{t+1}, A_{t+1}) P̂(S_{t+2} | S_{t+1}, A_{t+1}).
The joint distribution P(S_{t+1}, A_{t+1}) is fixed to the joint distribution resulting from the last step of the algorithm.
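The alternation just described, optimizing the behavior against the current model and then re-sampling the world model, can be illustrated with a toy loop. This is a schematic stand-in only: the actual algorithm performs iterative information projections between distribution manifolds, whereas the sketch below simply alternates between fitting an empirical transition model and greedily adapting a policy toward a goal state. All names and the 1-D environment are hypothetical:

```python
import random
from collections import defaultdict

def learn_agent(env_step, states, actions, rounds=30, samples=200, seed=0):
    """Toy alternation between (1) fitting an empirical world model from
    sampled transitions and (2) updating the behavior to exploit that
    model -- a schematic stand-in for the em-style algorithm."""
    rng = random.Random(seed)
    counts = defaultdict(lambda: defaultdict(int))  # counts[(s, a)][s']
    policy = {s: rng.choice(actions) for s in states}

    def predicted_next(s, a):
        seen = counts[(s, a)]
        return max(seen, key=seen.get) if seen else s  # most likely successor

    for _ in range(rounds):
        # world-model step: sample transitions under the current policy,
        # with some exploration, and update the empirical model.
        s = rng.choice(states)
        for _ in range(samples):
            a = rng.choice(actions) if rng.random() < 0.2 else policy[s]
            s2 = env_step(s, a)
            counts[(s, a)][s2] += 1
            s = s2
        # behavior step: pick, per state, the action whose predicted
        # successor lies closest to the goal (the largest state index).
        goal = max(states)
        for s in states:
            policy[s] = min(actions,
                            key=lambda a: abs(goal - predicted_next(s, a)))
    return policy

# 1-D world: moving right (+1) brings the agent toward the goal state 4.
step = lambda s, a: max(0, min(4, s + a))
policy = learn_agent(step, states=range(5), actions=(-1, 1))
print([policy[s] for s in range(4)])  # mostly +1, i.e. "move right"
```

The key structural point mirrored here is that the behavior update can only be as good as the current empirical model, which is exactly why the quality of the world model bounds the agent's ability to exploit its environment.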
Similar to the agent manifold above, we also define a world agent manifold. Now we are able to define the algorithm depicted in Figure 17.

Figure 1: Five different agents in their environment (left) and the possible sensor lengths, from 0.5 on the top right to 2 on the bottom right.

Figure 4: The sensorimotor loop from the perspective of the agent, for the PWM agents in the top and the LWM agents in the bottom figure.

Figure 5: The sensorimotor loop with a highlighted empirical and internal world model.

Figure 6: Graph corresponding to the split system in the case of Φ_IIT.

Figure 7: Graph corresponding to the split system in the case of Ψ_SI and Ψ_C.

Figure 8: Graph corresponding to the split system in the case of Ψ_SynP.

Figure 9: Graph corresponding to the split system in the case of Ψ_MC.

Figure 10: Morphological Computation, Integrated Information and synergistic prediction for the successful, complete LWM agents.

Figure 12: The Morphological Computation and Integrated Information of the PWM agents, for successful agents on the left and unsuccessful agents on the right.

Figure 13: Morphological Computation and effective information integration for the successful PWM agents with a varying accuracy of the world model.

Figure 14: The two top rows consist of the success rate, Integrated Information and synergistic prediction results for the unsuccessful LWM agents, and the bottom rows depict the same results for the successful agents.

Figure 15: The measures for sensory information, control, effective information integration and Morphological Computation, for the successful LWM agents in the top and for the unsuccessful agents in the bottom row.

Figure 16: Comparison of the synergistic prediction for the successful LWM agents in the first and for the successful split LWM agents in the second column.

Table 1: Arithmetic mean of the success rates of agents with random movement, the complete LWM agents and the split LWM agents.