Human pose estimation has dramatically improved thanks to the continuous developments in deep learning. However, marker-free human pose estimation based on standard frame-based cameras is still slow and power hungry for real-time feedback interaction because of the huge number of operations necessary for large Convolutional Neural Network (CNN) inference. Event-based cameras such as the Dynamic Vision Sensor (DVS) quickly output sparse moving-edge information. Their sparse and rapid output is ideal for driving low-latency CNNs, thus potentially allowing real-time interaction for human pose estimators. Although the application of CNNs to standard frame-based cameras for human pose estimation is well established, their application to event-based cameras is still under study. This paper proposes a novel benchmark dataset of human body movements, the Dynamic Vision Sensor Human Pose dataset (DHP19). It consists of recordings from 4 synchronized 346x260 pixel DVS cameras, for a set of 33 movements with 17 subjects. DHP19 also includes a 3D pose estimation model that achieves an average 3D pose estimation error of about 8 cm, despite the sparse and reduced input data from the DVS.