In pinpoint search strategies, gradual expansion of a walker's trajectory can be efficient for wide-range exploration when the location of the goal is unpredictable. However, small zigzag-shaped movements are also important for small-range exploration. A key issue is what rules allow random walkers to execute such an optimized walk. Here, starting from a simple expansion model, we investigated how flexible exploration is achieved when a random walker detects whether its current position has been visited before and alters its directional rule based on that recent experience. The agent modifies its directional rule when the current rule disturbs the recent flow of its movement. We showed that our model exhibits scale-free movements, so-called Lévy walks, which achieve both local and global search. In addition, the model exhibits power laws in the first return time. These results suggest that stochastic coordination of the directional rule may produce an adaptive movement strategy.
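To make the idea of a revisit-triggered rule switch concrete, the following is a minimal, hypothetical sketch rather than the authors' published model: a walker on a 2D lattice persists in its current heading (an "expansion" rule) and re-draws a random heading whenever it steps onto a previously visited site, which stands in for the condition that the current rule "disturbs the recent flow." The lattice, the `simulate` function, and the specific switching condition are illustrative assumptions.

```python
import random

# Hypothetical sketch: lattice walker that switches its directional rule
# whenever it detects a revisit to an already-visited site.
DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def simulate(n_steps=10_000, seed=0):
    rng = random.Random(seed)
    x, y = 0, 0
    visited = {(x, y)}              # memory of previously visited lattice sites
    heading = rng.choice(DIRECTIONS) # current directional rule: keep this heading
    trajectory = [(x, y)]
    for _ in range(n_steps):
        x, y = x + heading[0], y + heading[1]
        if (x, y) in visited:
            # Revisit detected: treat the current rule as disturbing the flow
            # and re-draw the heading at random (assumed switching condition).
            heading = rng.choice(DIRECTIONS)
        visited.add((x, y))
        trajectory.append((x, y))
    return trajectory

if __name__ == "__main__":
    path = simulate()
    print("final position:", path[-1], "distinct sites visited:", len(set(path)))
```

Under this kind of rule, straight runs lengthen in unexplored territory and break up where the walker crosses its own track, which is one plausible way alternating wide-range and small-range exploration could arise; whether the statistics match a Lévy walk depends on the actual rules studied in the paper.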

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.