Modern wearable robots are not yet intelligent enough to fully satisfy the demands of end-users, as they lack the sensor fusion algorithms needed to provide optimal assistance and to react quickly to perturbations or changes in user intention. Sensor fusion applications such as intention detection have been highlighted as a major challenge for both robotic orthoses and prostheses. To better examine the strengths and shortcomings of the field, this paper reviews existing sensor fusion methods for wearable robots, covering both stationary devices such as rehabilitation exoskeletons and portable devices such as active prostheses and full-body exoskeletons. Fusion methods are first presented as applied to individual sensing modalities (primarily electromyography, electroencephalography, and mechanical sensors), and then four approaches to combining multiple modalities are described. The strengths and weaknesses of the different methods are compared, and recommendations are made for future sensor fusion research.