Abstract
Deterministic behavior can be conveniently modeled in the framework of finite automata. We present a recurrent neural network model, based on biologically plausible circuit motifs, that learns deterministic transition models from given input sequences. Furthermore, we introduce simple, biologically inspired structural constraints on the connectivity. Simulation results show that these constraints lead to substantial reductions in training time and to more efficient use of resources in the converged system. Previous work has shown how specific instances of finite-state machines (FSMs) can be synthesized in recurrent neural networks by interconnecting multiple soft winner-take-all (SWTA) circuits, small circuits that faithfully reproduce many computational properties of cortical networks. We extend this framework with a reinforcement learning mechanism that learns correct state transitions as input and reward signals are provided. Not only does the network learn a model of the observed sequences and encode it in its recurrent synaptic weights, it also finds solutions that are close to optimal in the number of states required to model the target system, leading to efficient scaling behavior as the size of the target problem increases.