Over the past 60 years, scientific research has proposed many techniques to control robotic hand prostheses with surface electromyography (sEMG). Few of them have been implemented in commercial systems, partly due to limited robustness, which may be improved with multimodal data. This paper presents the first acquisition setup, acquisition protocol and dataset combining sEMG, eye tracking and computer vision to study robotic hand control. A data analysis on healthy controls gives a first idea of the capabilities and constraints of the acquisition procedure, which will subsequently be applied to amputees. The different data sources are not fused in this analysis; nevertheless, the results support the proposed multimodal data acquisition approach for prosthesis control. The sEMG movement classification results confirm that it is possible to classify several grasps with sEMG alone: sEMG can identify the grasp type as well as small differences in the grasped object (accuracy: 95%). The simultaneous recording of eye tracking and scene camera data shows that these sensors enable object detection for grasp selection, provided that several neurocognitive parameters are taken into account. In conclusion, this work on intact subjects presents an innovative acquisition setup and protocol. The first data analysis results are promising and lay the foundation for future work on amputees, aiming to improve the robustness of prostheses with multimodal data.
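The abstract does not specify the classification pipeline, but the kind of sEMG grasp classification it reports can be illustrated with a minimal sketch: extract classic time-domain features (root mean square and waveform length) from signal windows and assign each window to the nearest class centroid. The feature choice, the nearest-centroid rule, and the synthetic signals below are illustrative assumptions, not the method used in the paper.

```python
import math
import random

def extract_features(window):
    """Two classic time-domain sEMG features for one signal window:
    root mean square (RMS) and waveform length (WL)."""
    rms = math.sqrt(sum(x * x for x in window) / len(window))
    wl = sum(abs(window[i] - window[i - 1]) for i in range(1, len(window)))
    return (rms, wl)

def nearest_centroid(train, query):
    """Classify a feature vector by Euclidean distance to per-class
    centroids computed from labelled training features."""
    centroids = {
        label: tuple(sum(f[d] for f in feats) / len(feats) for d in range(2))
        for label, feats in train.items()
    }
    return min(centroids,
               key=lambda lbl: sum((query[d] - centroids[lbl][d]) ** 2
                                   for d in range(2)))

# Synthetic stand-in signals: two hypothetical grasps with different
# amplitude profiles (real data would come from sEMG electrodes).
random.seed(0)

def synth(amplitude, n=200):
    return [amplitude * math.sin(0.3 * i) + random.gauss(0, 0.1)
            for i in range(n)]

train = {
    "power_grasp": [extract_features(synth(1.0)) for _ in range(5)],
    "pinch_grasp": [extract_features(synth(0.3)) for _ in range(5)],
}
query = extract_features(synth(1.0))
print(nearest_centroid(train, query))  # a high-amplitude window maps to "power_grasp"
```

In practice such pipelines use many electrode channels, richer feature sets and stronger classifiers, but the window-feature-classify structure sketched here is the common backbone.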