Selective attention is an efficient strategy for engineering active vision systems that must extract relevant information from a scene in real time. We propose an implementation of a saliency-map-based active vision system in which Address-Event sensors and neuromorphic winner-take-all devices complement conventional imagers and machine vision components. A standard imager is mounted next to a Dynamic Vision Sensor (DVS) on a Pan-Tilt Unit. The output of the DVS is fed to an event-based Selective Attention Chip that implements a Winner-Take-All network with inhibition of return; the chip identifies and sequentially selects the most salient regions of the visual input space and drives the Pan-Tilt Unit accordingly. We characterize the system in experiments with real-world scenarios and natural scenes, and interface it to a workstation to implement models of top-down attention that influence the decision-making process.
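
The selection mechanism described above can be illustrated with a minimal software sketch: DVS events are leakily integrated into a saliency map, and a winner-take-all step repeatedly picks the most salient location while inhibition of return suppresses the previous winner's neighborhood. All function names and parameters here are illustrative assumptions, not the actual chip's circuitry or interface.

```python
import numpy as np

def accumulate_events(events, shape, decay=0.9):
    """Build a saliency map by leaky integration of DVS events (x, y).
    The leak makes recent activity dominate, as in an event-driven sensor.
    (Illustrative model; not the hardware implementation.)"""
    sal = np.zeros(shape)
    for x, y in events:
        sal *= decay          # leak: older events fade
        sal[y, x] += 1.0      # each event adds activity at its address
    return sal

def select_fixations(sal, n_fix=3, ior_radius=2, ior_gain=1.0):
    """Winner-take-all with inhibition of return (IOR): sequentially
    pick the n_fix most salient locations, suppressing each winner's
    neighborhood so attention moves on to the next region."""
    sal = sal.copy()
    fixations = []
    h, w = sal.shape
    for _ in range(n_fix):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)
        fixations.append((x, y))   # these targets would drive the pan-tilt unit
        y0, y1 = max(0, y - ior_radius), min(h, y + ior_radius + 1)
        x0, x1 = max(0, x - ior_radius), min(w, x + ior_radius + 1)
        sal[y0:y1, x0:x1] -= ior_gain * sal[y, x]  # inhibition of return
    return fixations
```

In this sketch, top-down attention from a workstation could be modeled by adding a bias map to `sal` before the winner-take-all step, weighting task-relevant regions.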