Reasoning cartographic knowledge in deep learning-based map generalization with explainable AI


Fu, Cheng; Zhou, Zhiyong; Xin, Yanan; Weibel, Robert (2024). Reasoning cartographic knowledge in deep learning-based map generalization with explainable AI. International Journal of Geographical Information Science: Epub ahead of print.

Abstract

Cartographic map generalization involves complex rules, and full automation has still not been achieved, despite many efforts over the past few decades. Pioneering studies show that some map generalization tasks can be partially automated by deep neural networks (DNNs). However, DNNs have so far been used as black-box models in these studies. We argue that integrating explainable AI (XAI) into a deep learning (DL)-based map generalization process can give more insights for developing and refining the DNNs by revealing exactly what cartographic knowledge is learned. Following an XAI framework for an empirical case study, visual analytics and quantitative experiments were applied to explain the importance of input features with respect to the predictions of a pre-trained ResU-Net model. This experimental case study finds that the XAI-based visualization results can easily be interpreted by human experts. With the proposed XAI workflow, we further find that the DNN pays more attention to the building boundaries than to the interior parts of the buildings. We thus suggest that boundary intersection over union is a better evaluation metric than the commonly used intersection over union for assessing raster-based map generalization results. Overall, this study shows the necessity and feasibility of integrating XAI as part of future DL-based map generalization development frameworks.
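The abstract's recommendation of boundary intersection over union over plain intersection over union can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes binary NumPy rasters of building pixels, approximates the boundary band of a mask by subtracting a morphological erosion (the band width d is a free parameter), and uses a toy pair of masks whose interiors largely coincide while their outlines are shifted.

```python
# Minimal sketch (assumption, not the authors' code): IoU vs. boundary IoU
# for binary raster masks, with the boundary band taken as mask minus its
# d-pixel erosion.
import numpy as np
from scipy.ndimage import binary_erosion


def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Standard intersection over union of two binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union > 0 else 1.0


def boundary_band(mask: np.ndarray, d: int = 2) -> np.ndarray:
    """Pixels of the mask lying within d pixels of its boundary."""
    eroded = binary_erosion(mask, iterations=d, border_value=0)
    return np.logical_and(mask, np.logical_not(eroded))


def boundary_iou(pred: np.ndarray, target: np.ndarray, d: int = 2) -> float:
    """IoU restricted to the boundary bands of both masks."""
    return iou(boundary_band(pred, d), boundary_band(target, d))


# Toy example: a generalized building whose interior mostly matches the
# target but whose outline is shifted by two pixels scores high on plain
# IoU yet noticeably lower on boundary IoU.
target = np.zeros((64, 64), dtype=bool)
target[10:50, 10:50] = True
pred = np.zeros_like(target)
pred[12:52, 10:50] = True

print(f"IoU:          {iou(pred, target):.3f}")
print(f"Boundary IoU: {boundary_iou(pred, target, d=2):.3f}")
```

In this toy case the plain IoU stays above 0.9 although the outline is displaced, while the boundary IoU penalizes the shifted contour, which is the kind of boundary-focused behaviour the abstract argues matters for evaluating generalized buildings.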

Statistics

Downloads: 3 since deposited on 03 Jul 2024 (3 in the past 12 months)

Additional indexing

Item Type: Journal Article, refereed, original work
Communities & Collections: 07 Faculty of Science > Institute of Geography
Dewey Decimal Classification: 910 Geography & travel
Scopus Subject Areas: Physical Sciences > Information Systems
  Social Sciences & Humanities > Geography, Planning and Development
  Social Sciences & Humanities > Library and Information Sciences
Language: English
Date: 20 June 2024
Deposited On: 03 Jul 2024 14:19
Last Modified: 04 Jul 2024 20:00
Publisher: Taylor & Francis
ISSN: 1365-8816
OA Status: Green
Publisher DOI: https://doi.org/10.1080/13658816.2024.2369535
Project Information:
  • Funder: Swiss National Science Foundation
  • Grant ID:
  • Project Title:
  • Content: Published Version
  • Language: English
  • Licence: Creative Commons: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)