Recent work on training methods for reduced-precision deep convolutional networks shows that these networks can achieve accuracy similar to that of full-precision networks on classification tasks. Reduced-precision networks decrease the memory and computational demands placed on the computing platform. This paper investigates the impact of reduced precision on deep Recurrent Neural Networks (RNNs) trained on a regression task, in this case monaural source separation. The effect of reduced precision is explored for two popular recurrent architectures: vanilla RNNs and RNNs with Long Short-Term Memory (LSTM) units. The results show that the performance of the networks, as measured by blind source separation metrics and speech intelligibility tests on two datasets, decreases very little even when the weight precision is reduced to 4 bits.
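To make the notion of 4-bit weight precision concrete, the sketch below shows one common way to quantize a weight matrix to a fixed number of bits: uniform quantization over a symmetric range. This is a hypothetical illustration of the general technique, not the specific quantization scheme used in the paper; the function name and range choice are assumptions.

```python
import numpy as np

def quantize_uniform(w, n_bits=4):
    """Uniformly quantize weights to 2**n_bits levels.

    Illustrative sketch (not the paper's exact scheme): weights are
    mapped to evenly spaced levels spanning [-max|w|, +max|w|].
    """
    levels = 2 ** n_bits
    scale = float(np.max(np.abs(w))) or 1.0
    step = 2 * scale / (levels - 1)
    # Shift into [0, 2*scale], round to the nearest level index
    # (an integer in [0, levels - 1]), then map back.
    q = np.round((w + scale) / step)
    return q * step - scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
wq = quantize_uniform(w, n_bits=4)
print(np.unique(wq).size)  # at most 2**4 = 16 distinct weight values
```

With 4 bits, every weight collapses onto one of at most 16 values, and the maximum rounding error per weight is half a quantization step; the results summarized above indicate that RNN separation performance tolerates this coarseness well.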