In this paper, we give a new double twist to the robot localization problem. We solve the problem for the case of prior maps that are semantically annotated, perhaps even sketched by hand. Data association is achieved not through the detection of visual features but through the detection of the object classes used in the annotation of the prior maps. To avoid the pitfalls of general object recognition, we propose a new representation of the query images: a vector of detection scores, one for each object class. Given such soft object detections, we are able to generate hypotheses about pose and to refine them through particle filtering. In contrast to small, confined office and kitchen spaces, our experiment takes place in a large, open urban rail station with multiple semantically ambiguous places. The success of our approach shows that our new representation is a robust way to exploit the plethora of existing prior maps for GPS-denied environments, avoiding the data association problems that arise when matching point clouds or visual features.
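The measurement model and particle filter update described above can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the semantic map contents, the exponential distance-decay score model in `expected_scores`, and the Gaussian likelihood over score vectors are all assumptions made for this example.

```python
import math
import random

random.seed(0)

# Hypothetical semantically annotated map: object class -> landmark positions (m).
SEMANTIC_MAP = {
    "ticket_machine": [(2.0, 1.0), (8.0, 1.0)],  # semantically ambiguous pair
    "bench": [(5.0, 4.0)],
    "exit_sign": [(9.0, 5.0)],
}
CLASSES = sorted(SEMANTIC_MAP)

def expected_scores(x, y, sensing_radius=3.0):
    """Predict a soft detection-score vector at position (x, y):
    the score for a class decays with distance to its nearest landmark."""
    scores = []
    for c in CLASSES:
        d = min(math.hypot(x - lx, y - ly) for lx, ly in SEMANTIC_MAP[c])
        scores.append(math.exp(-d / sensing_radius))
    return scores

def likelihood(observed, predicted, sigma=0.1):
    """Gaussian likelihood of an observed score vector given the prediction."""
    sq = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    return math.exp(-sq / (2 * sigma ** 2))

def particle_filter_step(particles, observed):
    """One measurement update followed by multinomial resampling."""
    weights = [likelihood(observed, expected_scores(x, y)) for x, y in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    return random.choices(particles, weights=weights, k=len(particles))

# Initialize position hypotheses uniformly over a 10 x 6 m hall.
particles = [(random.uniform(0, 10), random.uniform(0, 6)) for _ in range(500)]

# Simulated query: the robot stands near the ticket machine at (2, 1).
z = expected_scores(2.0, 1.0)
for _ in range(5):
    particles = particle_filter_step(particles, z)

mx = sum(p[0] for p in particles) / len(particles)
my = sum(p[1] for p in particles) / len(particles)
print(f"estimated position: ({mx:.1f}, {my:.1f})")
```

Even with two indistinguishable ticket machines in the map, the scores for the other classes (bench, exit sign) differ between the two candidate locations, so repeated updates let the filter concentrate its particles on the correct hypothesis.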