Land-cover and land-use semantic labeling in centimeter-resolution (ultra-high-resolution) imagery is mostly performed by supervised classification of informative descriptors extracted from spatially coherent but small objects (e.g., superpixels or patches). In this paper, we extend this reasoning with a class-specific, multi-scale, bottom-up object-proposal strategy for semantic labeling. Specifically, we rely on a fully trainable boundary (edge) detector that allows us to extract class-specific object proposals. Such proposals enable training rich appearance and object models, as well as enhanced spatial reasoning. We evaluate the proposed strategy on the Vaihingen dataset with promising results.
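To make the bottom-up, multi-scale proposal idea concrete, the following is a minimal toy sketch, not the method described above: it substitutes a plain gradient-magnitude edge map for the trainable boundary detector, forms "proposals" as connected regions at two edge-strength thresholds (a coarse and a fine scale), and classifies each region with a dummy intensity rule. All function names, thresholds, and the two class labels are hypothetical.

```python
import numpy as np

def edge_strength(img):
    """Gradient magnitude as a crude stand-in for a trainable boundary detector."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def connected_regions(mask):
    """Label 4-connected components of a boolean mask via iterative flood fill.

    Returns (labels, n): an integer label image (0 = background) and the
    number of components found.
    """
    labels = np.zeros(mask.shape, dtype=int)
    n = 0
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                n += 1
                labels[i, j] = n
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = n
                            stack.append((ny, nx))
    return labels, n

def classify_region(img, labels, rid):
    """Dummy per-proposal classifier: mean intensity decides a (made-up) class."""
    return "building" if img[labels == rid].mean() > 0.5 else "background"

# Toy 8x8 "image": a bright square on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
edges = edge_strength(img)

# Fine scale: low edge threshold splits the image into interior + exterior.
labels_fine, n_fine = connected_regions(edges < 0.25)
# Coarse scale: high threshold merges almost everything into one proposal.
labels_coarse, n_coarse = connected_regions(edges < 0.6)
```

At the fine scale the square interior and the surrounding background become two separate proposals, while the coarse threshold yields a single merged region; a real system would replace the gradient map with a learned, class-specific boundary detector and the intensity rule with a trained classifier over rich appearance features.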