We present a study on human perception of map complexity, with the objective of better understanding design decisions that may lead to undesirable levels of complexity in web maps. We compare three complexity metrics to human ratings of complexity obtained through a user survey. Specifically, we use two previously published algorithmic approaches that measure feature congestion (FC) and subband entropy (SE), as well as our own approach of counting object types rather than individual objects. We compare these metrics with each other and with human complexity ratings for three maps of the same area from the map providers Google Maps, Bing Maps, and OpenStreetMap. Each map design is assessed at three different scales (levels of detail). We find that (1) the FC and SE metrics appear to be adequate predictors of what humans consider complex; (2) object-type counts are slightly less successful at predicting human-rated complexity, implying that clutter matters more to perceived complexity than diversity of symbology; and (3) generalization choices do affect human complexity ratings. These findings contribute to our understanding of what makes a map complex, with implications for designing maps that are easy to use.
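The object-type-count metric mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a hypothetical feature list where each rendered map object carries a symbology `type` label, and contrasts counting distinct types with counting every object:

```python
# Hypothetical sketch of an object-type-count complexity metric:
# complexity is the number of distinct symbology types present,
# rather than the total number of rendered objects.

def type_count_complexity(features):
    """Count distinct object types (e.g. 'road', 'park', 'label')."""
    return len({f["type"] for f in features})

def object_count_complexity(features):
    """Baseline for comparison: count every rendered object."""
    return len(features)

# Toy example: many objects but few types yields a low
# type-count complexity relative to the raw object count.
features = [
    {"type": "road"}, {"type": "road"}, {"type": "road"},
    {"type": "park"}, {"type": "label"}, {"type": "label"},
]
print(type_count_complexity(features))    # -> 3 distinct types
print(object_count_complexity(features))  # -> 6 objects
```

The distinction captured here is the one the abstract draws: a map can be cluttered (high object count) without being symbolically diverse (low type count), and vice versa.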