Supervised, unsupervised, and reinforcement learning (RL) are among the most powerful learning paradigms for neuromorphic systems. These systems typically exploit unsupervised learning because it allows them to learn the distribution of sensory information. To perform a task, however, sensory information alone is not sufficient; the system also needs information about the context in which it operates. In this sense, reinforcement learning is well suited to interacting with the environment while performing a context-dependent task. This brief presents a digital architecture for a spiking neural network (SNN) model with RL capability, suitable for learning a context-dependent task. The proposed architecture is composed of hardware-friendly leaky integrate-and-fire (LIF) neurons and spike-timing-dependent plasticity (STDP)-based synapses implemented on a field-programmable gate array (FPGA). Hardware synthesis and physical implementation show that the resulting circuits faithfully reproduce the outcome of a learning task previously performed in both animal experiments and computational models. Compared with state-of-the-art neuromorphic FPGA circuits with context-dependent learning capability, our circuit fires 10.7 times fewer spikes, which accelerates learning by a factor of 15, while requiring 16 times less energy. This is a significant step toward fast, low-energy SNNs with context-dependent learning ability on FPGAs.
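The brief does not spell out the fixed-point neuron equations, but the idea of a "hardware-friendly" LIF neuron can be illustrated with a minimal sketch. The update below uses integer arithmetic and a shift-based leak (a common trick in digital implementations, since a right shift replaces a multiplier); the threshold, leak shift, and weight values are illustrative assumptions, not the paper's actual parameters.

```python
def lif_step(v, spike_in, weight, v_thresh=1024, leak_shift=4):
    """One fixed-point LIF update (illustrative, not the paper's exact design).

    v          : integer membrane potential
    spike_in   : whether a presynaptic spike arrived this time step
    weight     : integer synaptic weight added on an input spike
    v_thresh   : firing threshold (illustrative value)
    leak_shift : leak implemented as v -= v >> leak_shift (multiplier-free)
    Returns (new_v, fired).
    """
    v = v - (v >> leak_shift)   # exponential-like leak via right shift
    if spike_in:
        v += weight             # integrate synaptic input
    if v >= v_thresh:
        return 0, True          # fire and reset to resting potential
    return v, False
```

With these example parameters, a steady stream of input spikes drives the membrane potential up until the neuron fires and resets, after which integration starts over.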