This paper describes spike-based neural networks for optical flow and stereo estimation from Dynamic Vision Sensor data. The methods combine the Asynchronous Time-based Image Sensor (ATIS) with the SpiNNaker platform. The sensor generates spikes with sub-millisecond resolution in response to changes in scene illumination. These spikes are processed by a spiking neural network running on SpiNNaker with a 1-millisecond resolution to accurately determine the order and time difference of spikes from neighboring pixels, and thereby infer velocity and direction of motion, or depth. The spiking neural networks are variants of the Barlow-Levick method for optical flow estimation and of the Marr and Poggio model for stereo matching.
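The core idea of inferring velocity from the order and time difference of spikes at neighboring pixels can be illustrated with a minimal sketch. This is not the paper's SpiNNaker implementation; the function name and the pixel-pitch parameter are illustrative assumptions, and times are quantized to the 1 ms resolution mentioned above.

```python
# Hypothetical sketch of a Barlow-Levick-style velocity estimate from the
# spike times of two adjacent pixels A and B. Not the paper's code; the
# pixel pitch value is an assumed parameter for illustration only.

def estimate_velocity(t_a_ms, t_b_ms, pixel_pitch_um=30.0):
    """Infer motion direction and speed from the spike-time difference
    (in milliseconds, quantized to 1 ms) between neighboring pixels."""
    dt = round(t_b_ms) - round(t_a_ms)  # 1 ms timing resolution
    if dt == 0:
        return None  # simultaneous at this resolution: motion unresolved
    # Spike order gives direction; the time difference gives speed.
    direction = "A->B" if dt > 0 else "B->A"
    speed_um_per_ms = pixel_pitch_um / abs(dt)
    return direction, speed_um_per_ms

result = estimate_velocity(10.0, 13.0)  # B fires 3 ms after A
# direction "A->B", speed 30/3 = 10.0 um/ms
```

A stereo-matching variant would compare spike coincidence across the two sensors' pixel arrays rather than along one array, in the spirit of the Marr and Poggio cooperative model.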