
On the Relevance of Explanation for RDF Resources Similarity / Colucci, Simona; Donini, Francesco M.; Di Sciascio, Eugenio. - PRINT. - 488:(2023), pp. 96-107. (Paper presented at the 3rd International workshop on Model-driven Organizational and Business Agility, MOBA 2023, held in Zaragoza, Spain, June 12-13, 2023) [10.1007/978-3-031-45010-5_8].

On the Relevance of Explanation for RDF Resources Similarity

Simona Colucci; Francesco M. Donini; Eugenio Di Sciascio

Abstract

Artificial Intelligence (AI) has been shown to productively affect organizational decision making in terms of the economic value it returns. In particular, agile business may significantly benefit from the ability of AI systems to constantly maintain contextual knowledge awareness. A key added value of such systems is the ability to explain their results: users are more inclined to trust a system, and to hold it accountable, when its output is returned together with a human-readable explanation. Nevertheless, some of the information in an explanation may be irrelevant to users, despite being truthful. This paper discusses the relevance of the explanations for resource similarity provided by AI systems. In particular, the analysis focuses on one system based on Large Language Models (LLMs), namely ChatGPT, and on one logic-based tool relying on the computation of the Least Common Subsumer in the Resource Description Framework (RDF). The discussion reveals the need for a formal distinction between relevant and irrelevant information, a need we address with a definition of relevance amenable to implementation.
2023
3rd International workshop on Model-driven Organizational and Business Agility, MOBA 2023
978-3-031-45009-9
File available for this record:
2023_On_the_Relevance_of_Explanation_for_RDF_Resources_Similarity_preprint.pdf
  Open access
  Type: Pre-print
  License: All rights reserved
  Size: 368.53 kB
  Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11589/262080
Citations
  • Scopus: 3
  • Web of Science: not available