Storing and reproducing temporal intervals is an important component of perception, action generation, and learning. How temporal intervals can be represented in neuronal networks is thus an important research question, both in the study of biological organisms and in the design of artificial neuromorphic systems. Here, we introduce a neural-dynamic computing architecture for learning the temporal durations of actions. The architecture uses a Dynamic Neural Field (DNF) representation of elapsed time and a memory trace dynamics to store the experienced action duration. Interconnected dynamical nodes signal the beginning of an action, its successful accomplishment, or its failure, and trigger the formation of a memory trace that corresponds to the action’s duration. The accumulated memory trace influences the competition between the dynamical nodes in such a way that the failure node gains a competitive advantage earlier if the stored duration is shorter. The model uses neurally grounded DNF dynamics and is a process model of how temporal durations may be stored in neural systems, both biological and artificial. The focus of this paper is on the mechanism for storing and using durations in artificial neuronal systems. The model is validated in closed-loop experiments with a simulated robot.
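The core mechanism described above can be sketched in a few lines: a one-dimensional field whose dimension codes elapsed time, and a low-pass memory trace that accumulates at the field site active when the success signal arrives. This is a minimal illustrative sketch under assumed parameter values and function names, not the paper's implementation; the field interaction kernel and the competing dynamical nodes of the full architecture are omitted.

```python
import numpy as np

# Illustrative sketch only. Field size, time constants, and helper names
# are assumptions chosen for a short, self-contained demo.

N = 100            # number of field sites, each coding an elapsed-time value
DT = 0.1           # integration time step
TAU_MEM = 0.5      # memory-trace time constant (fast here, for a short demo)

def elapsed_time_bump(t, width=3.0):
    """Localized field activation at the site coding elapsed time t."""
    x = np.arange(N)
    return np.exp(-0.5 * ((x - t) / width) ** 2)

def experience_trial(duration, mem):
    """One action: when the success signal arrives at `duration`, the memory
    trace accumulates where the elapsed-time field is currently active."""
    field = elapsed_time_bump(duration)      # field state at the success event
    mem += DT / TAU_MEM * (field - mem)      # low-pass memory-trace dynamics
    return mem

mem = np.zeros(N)
for _ in range(30):                          # repeated experience of one action
    mem = experience_trial(duration=40, mem=mem)

# The trace peaks at the site coding the stored duration. In the full model,
# a "failure" node reading this trace receives growing input once elapsed
# time passes the peak, so it wins the competition earlier when the stored
# duration is shorter.
stored = int(np.argmax(mem))
print(stored)
```

In this sketch the trace converges toward the bump imprinted at the success moment, so its peak location directly encodes the learned duration, mirroring the abstract's claim that the memory trace "corresponds to the action's duration."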