Humans are able to process speech and other sounds effectively in adverse environments, hearing through noise, reverberation, and interference from other speakers. To date, machines have been unable to match human performance. One profound difference between biological and engineered systems comes at the input stage. In machines, an acoustic signal is typically chopped into short, equally spaced segments in time. In biological systems, the cochlea outputs asynchronous spikes that respond in real time to acoustic input. In this paper, we describe a spiking cochlea implementation and recent experiments in both speaker and speech recognition that use spikes as input.
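The contrast between the two front ends can be sketched in a few lines. The following is a toy illustration only, not the cochlea model described in this paper: `frame_segments` stands in for the conventional fixed-frame approach, and `level_crossing_spikes` is a hypothetical level-crossing encoder that emits an event only when the signal changes, so its output is asynchronous and data-driven rather than uniformly clocked.

```python
import numpy as np

def frame_segments(signal, frame_len):
    """Machine-style front end: chop the signal into equal fixed-length frames."""
    n = len(signal) // frame_len
    return signal[: n * frame_len].reshape(n, frame_len)

def level_crossing_spikes(signal, threshold):
    """Toy event-driven front end (hypothetical, for illustration): emit a
    spike (sample index) whenever the signal has moved by more than
    `threshold` since the last spike."""
    spikes = []
    last = signal[0]
    for i, x in enumerate(signal):
        if abs(x - last) >= threshold:
            spikes.append(i)
            last = x
    return spikes

# A burst of activity surrounded by silence: frames are produced uniformly
# regardless of content, while spikes cluster only where the signal changes.
t = np.linspace(0, 1, 1000)
sig = np.where((t > 0.4) & (t < 0.6), np.sin(2 * np.pi * 50 * t), 0.0)
frames = frame_segments(sig, 100)          # always 10 frames, even for silence
spikes = level_crossing_spikes(sig, 0.5)   # events only during the burst
```

The frame-based output is dense and clock-driven; the spike list is sparse and concentrated where the acoustic energy actually is, which is the property a spiking cochlea exploits.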