Abstract
It is widely agreed that, lacking proprioceptive feedback, amputees must rely on visual input to monitor and control the position of a prosthesis while reaching and grasping. Visual information has therefore been a prerequisite in prosthetic hand biofeedback studies, and the underlying characteristics of the other artificial feedback methods in use today, such as auditory, electrotactile, or vibrotactile feedback, have not been clearly explored. The purpose of this paper is to explore whether audio feedback alone can convey more than one independent variable (multichannel) simultaneously, without relying on vision, to improve the learning of a new perception; in this case, learning and understanding the artificial proprioception of a prosthetic hand while reaching.
Experiments are conducted to determine whether audio signals can serve as a multi-variable dynamic sensory substitution in reaching movements without relying on visual input. Two groups are tested: the first uses only audio information and the second only visual information to convey computer-simulated trajectories of two fingers.