Training neural networks with low-resolution synaptic weights has attracted much interest recently, and inference in neural networks with binary activations and binary weights has been shown to achieve near state-of-the-art performance on a wide range of tasks. However, current methods for training such networks rely on high-resolution gradients or update probabilities. Low-resolution training methods would be useful for neuromorphic architectures that enable low-power hardware implementations, as well as for emerging memory technologies based on memristive devices, which do not always support fine-grained state changes. In this paper, we propose a training method, Randomized Unregulated Step Descent (RUSD), as an alternative to gradient descent that uses only a single bit of information about the gradient; we show that it is compatible with low-resolution integer-arithmetic platforms and is resilient to some of the prominent non-idealities of memristive memories. We verify the performance of RUSD on several standard machine-learning benchmarks.
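To make the "single bit of gradient information" idea concrete, the following is a minimal sketch of a sign-based, randomized integer update. The function name `rusd_update`, the step size, and the update probability `p` are illustrative assumptions for exposition, not the paper's actual algorithm or hyper-parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def rusd_update(weights, grad_sign, step=1, p=0.5):
    """Hypothetical single-bit update: move each integer weight one fixed
    step against the sign of its gradient, applying the step to each
    weight independently with probability p. Only the gradient's sign
    (one bit per weight) is used; magnitudes are discarded."""
    mask = rng.random(weights.shape) < p  # randomized step application
    return weights - step * grad_sign * mask

# Toy example with 8-bit integer weights, as a low-resolution platform
# might store them.
w = np.array([3, -2, 0, 5], dtype=np.int8)
g_sign = np.sign(np.array([0.7, -0.1, 0.0, 2.3])).astype(np.int8)
w_new = rusd_update(w, g_sign, p=1.0)  # p=1 makes the step deterministic
```

Because the update is a fixed-size integer step gated by a coin flip, it needs no floating-point accumulators, which is what makes this style of rule attractive for memristive devices with coarse, stochastic state changes.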