The computational task of continuous-time state estimation, i.e.~nonlinear filtering, and identification, i.e.~parameter learning, poses a class of interesting problems that mathematicians have studied for over 50 years and that has received increasing attention in both the machine-learning and neuroscience communities. Moreover, the question of how Bayesian inference in general, and nonlinear filtering in particular, can be implemented in neuronal tissue might be a step towards understanding information processing in the brain. Yet possible answers to this question remain debated. Starting from the mathematical formalism of nonlinear filtering theory, we propose a stochastic rate-based network, formulated as a stochastic differential equation, whose activity samples the posterior dynamics. The underlying mathematical framework is flexible enough to accommodate extensions to further tasks such as parameter learning. We show that the numerical performance of the model is adequate to account for both nonlinear filtering and identification problems. Our network may be implemented as a recurrent neuronal network in a biologically plausible manner and thus offers a concrete proposal for how neural sampling might be implemented in the brain.
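To make the filtering task concrete: a standard sampling-based approach to the continuous-time nonlinear filtering problem is a particle filter over an Euler--Maruyama discretisation of the hidden dynamics. The sketch below is a generic illustration of posterior sampling for a nonlinear state-space model, not the rate-based network proposed here; the drift, noise levels, and observation model are hypothetical choices for demonstration.

```python
import numpy as np

# Illustrative bootstrap particle filter for a 1-D nonlinear filtering
# problem, using an Euler-Maruyama discretisation of the hidden SDE.
# All model parameters below are hypothetical.

rng = np.random.default_rng(0)

dt, T, N = 0.01, 500, 1000          # step size, number of steps, particles
f = lambda x: -4 * x * (x**2 - 1)   # drift of a bistable hidden state
sigma, obs_noise = 0.5, 0.2         # process and observation noise scales

# Simulate the hidden state x_t and noisy observations y_t = x_t + noise.
x, xs, ys = 0.0, [], []
for _ in range(T):
    x += f(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    xs.append(x)
    ys.append(x + obs_noise * rng.standard_normal())

# Particle filter: propagate samples through the prior SDE, then
# reweight and resample against each observation (Gaussian likelihood).
particles = rng.standard_normal(N)
means = []
for y in ys:
    particles = particles + f(particles) * dt \
        + sigma * np.sqrt(dt) * rng.standard_normal(N)
    logw = -0.5 * ((y - particles) / obs_noise) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    particles = rng.choice(particles, size=N, p=w)  # multinomial resampling
    means.append(particles.mean())

rmse = np.sqrt(np.mean((np.array(means) - np.array(xs)) ** 2))
print(f"posterior-mean RMSE: {rmse:.3f}")
```

The empirical distribution of the particles approximates the filtering posterior at each time step; a neural-sampling implementation replaces this explicit weighting and resampling with network dynamics whose stationary activity plays the role of the samples.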