In artificial vision applications such as tracking, large amounts of data captured by sensors are transferred to processors to extract the information relevant to the task at hand. Smart vision sensors reduce the computational burden of visual processing pipelines by placing processing capabilities next to the sensor. In this work, we use a vision chip in which a small processor with memory is located next to each photosensitive element; the architecture of this device is optimized for local operations. To perform a task such as tracking, we implement a neuromorphic approach based on a Dynamic Neural Field (DNF), which segregates, memorizes, and tracks objects. Our system, consisting of the vision chip running the DNF, outputs only the activity that corresponds to the tracked objects. Because computation happens at the pixel level, these outputs reduce both the bandwidth needed to transfer information and the amount of further post-processing.