Rehabilitation robots physically support patients during exercise, but their assistive strategies often constrain patients by forcing them to execute predefined motions. To allow more freedom during rehabilitation, the robot should be able to predict what motion the patient wants to perform and then intelligently support it. As a first step, this paper presents an algorithm that predicts the targets of reaching motions made with an arm rehabilitation exoskeleton. Different sensing modalities are compared with respect to their predictive ability: arm kinematics, eye tracking, contextual information, and combinations of these modalities. Supervised machine learning is used to make predictions at different points in time during the motion. Results of offline cross-validation with 12 healthy subjects show that eye tracking can predict the target earlier and more accurately than arm kinematics, especially when possible targets are close together. Combining eye tracking with contextual information further improves prediction accuracy. The foreseen next step is to use these predictions to guide the rehabilitation robot and then test the algorithm in real time with stroke patients.
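The idea of fusing modalities for target prediction can be illustrated with a minimal, purely hypothetical sketch. This is not the paper's classifier: all target names, coordinates, and weights below are invented for illustration, and the paper's actual features and supervised learning method are not reproduced here. The sketch scores each candidate target by combining a gaze cue (distance from the current fixation to the target) with a kinematic cue (misalignment between the hand's movement direction and the direction to the target).

```python
import math

# Hypothetical candidate targets (x, y) on a table; names and positions
# are invented for illustration. Note that "cup" and "phone" are close
# together, the case where the abstract says gaze is most informative.
TARGETS = {"cup": (0.30, 0.40), "phone": (0.35, 0.42), "book": (-0.25, 0.35)}

def predict_target(gaze_point, hand_positions, w_gaze=0.7, w_kin=0.3):
    """Score each candidate target by combining two modality cues:
    - gaze: Euclidean distance from the current fixation to the target
    - kinematics: angle between the current hand movement direction
      and the direction from the hand to the target
    Returns the target name with the lowest weighted cost. The weights
    are arbitrary; a real system would learn them from training data."""
    (x0, y0), (x1, y1) = hand_positions[-2], hand_positions[-1]
    move = (x1 - x0, y1 - y0)
    best, best_cost = None, float("inf")
    for name, (tx, ty) in TARGETS.items():
        gaze_cost = math.hypot(gaze_point[0] - tx, gaze_point[1] - ty)
        to_target = (tx - x1, ty - y1)
        dot = move[0] * to_target[0] + move[1] * to_target[1]
        norm = math.hypot(*move) * math.hypot(*to_target)
        # Angular misalignment in [0, pi]; worst case if hand is still.
        kin_cost = math.acos(max(-1.0, min(1.0, dot / norm))) if norm > 0 else math.pi
        cost = w_gaze * gaze_cost + w_kin * kin_cost
        if cost < best_cost:
            best, best_cost = name, cost
    return best

# Early in the motion the hand has barely moved, so kinematics cannot
# separate the two nearby targets, but gaze already disambiguates them.
print(predict_target((0.31, 0.40), [(0.0, 0.0), (0.02, 0.03)]))  # → cup
```

In this toy setting the gaze term dominates when candidate targets are clustered, which mirrors the abstract's finding that eye tracking enables earlier and more accurate predictions than arm kinematics alone.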