Abstract
In the fields of neuroscience, psychology, and robotics, an important question is how to establish a unified system that autonomously acquires both its input state space and optimal goal-oriented action policies in unknown environments. An important requirement for such a system is to understand how multiple sources of sensory information can be integrated to support autonomous behavior. So far, Distributed Adaptive Control (DAC), a self-contained neuronal system, has used only egocentric cues to achieve goal-directed behavior in a foraging task; however, the navigation strategies it implicitly acquires are not well exploited. In this paper, we evaluate the hypothesis that learned egocentrically defined behavioral strategies can be improved by the integration of allocentric spatial information. Using an extension of the DAC architecture in the context of random foraging, we show that this integration is better seen as an instance of Bayesian inference than of selective attention. We provide an extensive analysis of the architecture and compare its performance against a well-known robotics algorithm to place it in a broader context. Our results further support the view that a Bayesian framework can provide a unified account of the organization of goal-oriented behavior.