Auditing fairness under unawareness through counterfactual reasoning

Cornacchia, Giandomenico; Anelli, Vito Walter; Biancofiore, Giovanni Maria; Narducci, Fedelucio; Pomo, Claudio; Ragone, Azzurra; Di Sciascio, Eugenio
2023-01-01

Abstract

Artificial intelligence (AI) is rapidly becoming the pivotal technology supporting critical judgments in many life-changing decisions. A biased AI tool can be particularly harmful, since such systems can promote or undermine people's well-being. Consequently, government regulations are introducing specific rules that prohibit the use of sensitive features (e.g., gender, race, religion) in algorithmic decision-making in order to avoid unfair outcomes. Unfortunately, such restrictions may not be sufficient to protect people from unfair decisions, as algorithms can still behave in a discriminatory manner. Indeed, even when sensitive features are omitted (fairness through unawareness), they may still be related to other features, known as proxy features. This study shows how to unveil whether a black-box model that complies with the regulations is nevertheless biased. We propose an end-to-end bias-detection approach exploiting a counterfactual reasoning module and an external classifier for sensitive features. Specifically, the counterfactual analysis finds the minimum-cost variations that grant a positive outcome, while the classifier detects non-linear patterns of non-sensitive features that proxy sensitive characteristics. The experimental evaluation demonstrates the proposed method's efficacy in detecting classifiers that learn from proxy features. We also scrutinize the impact of state-of-the-art debiasing algorithms in alleviating the proxy-feature problem.
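The abstract describes a two-component audit: a counterfactual module that searches for the minimum-cost change granting a positive outcome, and an external classifier trained to infer the sensitive attribute from non-sensitive features. The Python sketch below illustrates how such a pipeline could fit together; the function names (`find_counterfactual`, `proxy_flip_rate`), the greedy one-feature-at-a-time search, and the flip-rate statistic are illustrative assumptions for exposition, not the authors' actual implementation (see the DOI below for the published method).

```python
import numpy as np

def find_counterfactual(x, black_box, step=0.05, max_iter=200):
    """Greedy search for a low-cost variation of x that flips the
    black-box decision to the positive class. This is a simplified
    stand-in for the paper's counterfactual reasoning module."""
    cf = x.astype(float).copy()
    for _ in range(max_iter):
        if black_box.predict(cf.reshape(1, -1))[0] == 1:
            return cf  # positive outcome reached
        best, best_score = None, -np.inf
        # Perturb the single feature whose change most raises the
        # positive-class score, one step at a time.
        for j in range(cf.shape[0]):
            for delta in (-step, step):
                cand = cf.copy()
                cand[j] += delta
                score = black_box.predict_proba(cand.reshape(1, -1))[0, 1]
                if score > best_score:
                    best, best_score = cand, score
        cf = best
    return None  # no counterfactual found within the search budget

def proxy_flip_rate(black_box, sensitive_clf, X_denied):
    """Audit signal (illustrative): for individuals denied by the
    black box, generate a counterfactual and check whether the
    external sensitive-feature classifier changes its inferred group
    on the counterfactual. A high flip rate suggests the minimum-cost
    path to a positive outcome runs through proxies of the sensitive
    attribute."""
    flips, total = 0, 0
    for x in X_denied:
        cf = find_counterfactual(x, black_box)
        if cf is None:
            continue
        total += 1
        before = sensitive_clf.predict(x.reshape(1, -1))[0]
        after = sensitive_clf.predict(cf.reshape(1, -1))[0]
        flips += int(before != after)
    return flips / max(total, 1)
```

Under these assumptions, both `black_box` and `sensitive_clf` are scikit-learn-style classifiers (exposing `predict` and `predict_proba`); `proxy_flip_rate(black_box, sensitive_clf, X_denied)` then returns the fraction of counterfactuals on which the inferred sensitive group changes.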
2023
Auditing fairness under unawareness through counterfactual reasoning / Cornacchia, Giandomenico; Anelli, Vito Walter; Biancofiore, Giovanni Maria; Narducci, Fedelucio; Pomo, Claudio; Ragone, Azzurra; Di Sciascio, Eugenio. - In: INFORMATION PROCESSING & MANAGEMENT. - ISSN 0306-4573. - Print. - 60:2 (2023). [10.1016/j.ipm.2022.103224]
Files in this item:
File: 2023_Auditing_fairness_under_unawareness_through_counterfactual_reasoning_pdfeditoriale.pdf
Access: catalogue administrators only
Type: Publisher's version
Licence: All rights reserved
Size: 1.45 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11589/246923
Citations:
  • Scopus: 23
  • Web of Science: 16