In biological experiments, fluorescence imaging is used to image living and stimulated neurons. The analysis of fluorescence images is, however, a difficult task: the shape of an object cannot be inferred from fluorescence images alone. It is therefore not feasible to obtain good manual segmentations or ground-truth data from fluorescence images, and supervised learning approaches are not possible without such training data. To overcome these issues, we propose to synthesize fluorescence images, which we call 'Digitally Reconstructed Fluorescence Images' (DRFIs). We describe how DRFIs are computed from 'Serial Block-Face Scanning Electron Microscopy' (SBF-SEM) data. As a novelty, we use DRFIs to learn a distribution model of dendrite intensities and apply it to classify pixels into spine and non-spine pixels. Since DRFIs come with ground-truth labels for spine and non-spine pixels, we can also validate the classification results. With DRFIs, supervised learning on fluorescence images becomes feasible.
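The abstract does not specify the form of the intensity distribution model, so the following is only a minimal sketch of the general idea: class-conditional univariate Gaussians are fitted to the intensities of labeled spine and non-spine pixels from DRFIs, and each test pixel is assigned to the class with the higher likelihood. All function names and the placeholder training data are hypothetical, not taken from the paper.

```python
import numpy as np

def fit_gaussian(values):
    """Fit a 1-D Gaussian (mean, variance) to a set of pixel intensities."""
    return values.mean(), values.var() + 1e-12  # small floor avoids zero variance

def log_likelihood(x, mean, var):
    """Log density of a univariate Gaussian, evaluated elementwise."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def classify_pixels(image, spine_model, nonspine_model):
    """Return a boolean mask: True where the spine model is more likely."""
    ll_spine = log_likelihood(image, *spine_model)
    ll_non = log_likelihood(image, *nonspine_model)
    return ll_spine > ll_non

# Placeholder training data standing in for labeled DRFI pixel intensities:
# spines assumed brighter on average than the surrounding dendrite/background.
rng = np.random.default_rng(0)
spine_train = rng.normal(0.8, 0.1, 1000)
nonspine_train = rng.normal(0.2, 0.1, 1000)

spine_model = fit_gaussian(spine_train)
nonspine_model = fit_gaussian(nonspine_train)

# Classify every pixel of a (synthetic) test image.
test_image = rng.normal(0.2, 0.1, (32, 32))
mask = classify_pixels(test_image, spine_model, nonspine_model)
```

Because DRFIs carry per-pixel ground truth, the resulting mask can be compared directly against the known spine labels to validate the classifier.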