Abstract
This chapter reviews neuromorphic silicon retinas and cochleas that are based on the structure and operation of their biological counterparts. These devices are built in conventional chip fabrication technologies, with transistor circuits that emulate neural computations from biology. In first-generation sensors, the analog outputs of every cell were read out serially at fixed sample rates. The new generation of sensors reports only the outputs of active cells through digital events (spikes) that are communicated asynchronously. Such sensors respond more quickly and consume less power. Their digital “address-event” outputs rapidly convey precise timing information about the scene that is attained from conventional sensors only if they are continuously sampled at high rates. The sparseness, low latency, and spatio-temporal structure of this new form of sensor output data can benefit subsequent post-processing algorithms. Tradeoffs in the design of neuromorphic visual and auditory sensors are discussed. Examples are given of vision algorithms that process the address-events, using their spatio-temporal coherence, for low-level feature extraction and for object tracking.
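To make the address-event idea concrete, the following is a minimal Python sketch (not taken from the chapter): each event carries a pixel address, a timestamp, and a polarity, and an event-driven tracker updates an object centroid per event rather than per frame, exploiting the sparseness and precise timing of the stream. The names `AddressEvent`, `t_us`, and `window_us` are illustrative assumptions, not an actual sensor API.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class AddressEvent:
    """One address-event from a hypothetical silicon retina."""
    x: int         # pixel column address
    y: int         # pixel row address
    t_us: int      # timestamp in microseconds
    polarity: int  # +1 = brightness increase, -1 = decrease


def track_centroid(events: List[AddressEvent],
                   window_us: int = 1000) -> List[Tuple[int, float, float]]:
    """Event-driven tracking sketch: after each incoming event, return the
    centroid of all events within a sliding time window (t_us, cx, cy).
    Events are assumed to arrive in timestamp order."""
    centroids = []
    window: List[AddressEvent] = []
    for ev in events:
        window.append(ev)
        # discard events older than the time window
        window = [e for e in window if ev.t_us - e.t_us <= window_us]
        cx = sum(e.x for e in window) / len(window)
        cy = sum(e.y for e in window) / len(window)
        centroids.append((ev.t_us, cx, cy))
    return centroids


# Synthetic stream: an edge moving along the diagonal, one event per 100 us.
stream = [AddressEvent(x=i, y=i, t_us=i * 100, polarity=1) for i in range(5)]
print(track_centroid(stream)[-1])  # centroid after the last event
```

Note that the tracker's state is updated only when an event arrives, so a static scene generates no computation, which is the latency and power advantage the abstract attributes to event-driven sensors.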