Counterfactual Reasoning for Responsible AI Assessment / Cornacchia, G.; Anelli, V. W.; Narducci, F.; Ragone, A.; Di Sciascio, E. - 3486:(2023), pp. 347-352. (Paper presented at the conference 2023 Italia Intelligenza Artificiale - Thematic Workshops, Ital-IA 2023, held in Italy in 2023.)

Counterfactual Reasoning for Responsible AI Assessment

Cornacchia G.; Anelli V. W.; Narducci F.; Ragone A.; Di Sciascio E.
2023-01-01

Abstract

As the use of AI and ML models continues to grow, concerns about potential unfairness have become more prominent. Many researchers have focused on developing new definitions of fairness or on identifying biased predictions, but these approaches have limited scope and fail to analyze the minimum changes in user characteristics required for a positive outcome (i.e., counterfactuals). In response, the proposed methodology uses counterfactual reasoning to identify unfair behaviours in the fairness-under-unawareness setting. Furthermore, counterfactual reasoning can serve as a comprehensive methodology for evaluating all the essential conditions for a reliable, responsible, and trustworthy model.
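The "minimum changes in user characteristics required for a positive outcome" that the abstract calls counterfactuals can be illustrated with a minimal sketch. The linear scoring model, the feature names, the threshold, and the step size below are all illustrative assumptions, not the paper's actual method:

```python
import numpy as np

# Toy linear credit-scoring model (illustrative assumption, not the paper's model).
weights = np.array([0.6, 0.4])   # hypothetical weights for [income, credit_history]
THRESHOLD = 0.5

def predict(x):
    """Return 1 (positive outcome, e.g. loan approved) if the score clears the threshold."""
    return int(x @ weights >= THRESHOLD)

def minimal_counterfactual(x, feature, step=0.01, max_steps=1000):
    """Increase a single feature until the prediction flips to positive.
    The distance travelled is the 'minimum change' a counterfactual reports;
    real counterfactual methods search all features under a distance metric."""
    cf = x.copy()
    for _ in range(max_steps):
        if predict(cf) == 1:
            return cf, cf[feature] - x[feature]
        cf[feature] += step
    return None, None                # no counterfactual found within the budget

applicant = np.array([0.3, 0.4])     # rejected: score = 0.34 < 0.5
cf, delta = minimal_counterfactual(applicant, feature=0)
```

In a fairness-under-unawareness audit, a disproportionately large `delta` for members of a protected group, even though the protected attribute itself is never an input, is the kind of unfair behaviour such reasoning can surface.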
2023
2023 Italia Intelligenza Artificiale - Thematic Workshops, Ital-IA 2023
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11589/262722
Citations
  • Scopus 0