On the Moral Justification of Statistical Parity


Hertweck, Corinna; Heitz, Christoph; Loi, Michele (2021). On the Moral Justification of Statistical Parity. In: FAccT '21: 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event, Canada, 3-10 March 2021. ACM Digital Library, 747-757.

Abstract

A crucial but often neglected aspect of algorithmic fairness is the question of how we justify enforcing a certain fairness metric from a moral perspective. When fairness metrics are proposed, they are typically argued for by highlighting their mathematical properties. Rarely are the moral assumptions beneath the metric explained. Our aim in this paper is to consider the moral aspects associated with the statistical fairness criterion of independence (statistical parity). To this end, we consider previous work, which discusses the two worldviews "What You See Is What You Get" (WYSIWYG) and "We're All Equal" (WAE) and by doing so provides some guidance for clarifying the possible assumptions in the design of algorithms. We present an extension of this work, which centers on morality. The most natural moral extension is that independence needs to be fulfilled if and only if differences in predictive features (e.g. high school grades and standardized test scores are predictive of performance at university) between socio-demographic groups are caused by unjust social disparities or measurement errors. Through two counterexamples, we demonstrate that this extension is not universally true. This means that the question of whether independence should be used or not cannot be satisfactorily answered by only considering the justness of differences in the predictive features.
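The independence criterion discussed in the abstract requires that predictions be statistically independent of group membership, i.e. that the positive-prediction rate be equal across socio-demographic groups. A minimal sketch (not taken from the paper; function and variable names are illustrative) of how this is commonly measured as a rate difference:

```python
# Minimal sketch of the independence (statistical parity) criterion:
# predictions are independent of group membership when every group
# receives positive predictions at the same rate.

def statistical_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rates between any two groups.

    A value of 0 means the independence criterion is satisfied exactly.
    """
    rates = {}
    for g in set(groups):
        member_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(member_preds) / len(member_preds)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Illustrative data: group "a" is predicted positive at rate 0.5,
# group "b" at rate 0.25, so the parity difference is 0.25.
preds  = [1, 0, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(statistical_parity_difference(preds, groups))  # 0.25
```

Whether a nonzero difference is morally objectionable — rather than merely measured — is precisely the question the paper examines.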

Statistics

Citations:
22 citations in Web of Science®
28 citations in Scopus®

Downloads:
26 downloads since deposited on 31 Jan 2022
19 downloads in the last 12 months

Additional indexing

Item Type: Conference or Workshop Item (Paper), refereed, original work
Communities & Collections: 03 Faculty of Economics > Department of Informatics
Dewey Decimal Classification: 000 Computer science, knowledge & systems
Scopus Subject Areas: Social Sciences & Humanities > General Business, Management and Accounting
Uncontrolled Keywords: fairness, independence, statistical parity, distributive justice, bias
Scope: Discipline-based scholarship (basic research)
Language: English
Event End Date: 10 March 2021
Deposited On: 31 Jan 2022 06:14
Last Modified: 06 Mar 2024 14:36
Publisher: ACM Digital Library
OA Status: Green
Publisher DOI: https://doi.org/10.1145/3442188.3445936
Other Identification Number: merlin-id:21978
Project Information:
  • Funder: SNSF
  • Grant ID: 407740_187473
  • Project Title: Socially acceptable AI and fairness trade-offs in predictive analytics
  • Project Website: https://fair-ai.ch/
  • Content: Published Version