On the Relevance of Explanation for RDF Resources Similarity / Colucci, S.; Donini, F. M.; Di Sciascio, E. - 488:(2023), pp. 96-107. (Paper presented at the 3rd International Workshop on Model-driven Organizational and Business Agility, MOBA 2023, held in esp in 2023) [10.1007/978-3-031-45010-5_8].

On the Relevance of Explanation for RDF Resources Similarity

Colucci S.; Donini F. M.; Di Sciascio E.
2023-01-01

Abstract

Artificial Intelligence (AI) has been shown to productively affect organizational decision making, in terms of returned economic value. In particular, agile businesses may significantly benefit from the ability of AI systems to constantly pursue contextual knowledge awareness. Undoubtedly, a key added value of such systems is the ability to explain results. In fact, users are more inclined to trust systems, and to perceive them as accountable, when the output is returned together with a human-readable explanation. Nevertheless, some of the information in an explanation might be irrelevant to users, despite its truthfulness. This paper discusses the relevance of explanations of resource similarity provided by AI systems. In particular, the analysis focuses on one system based on Large Language Models (LLMs), namely ChatGPT, and on one logic-based tool relying on the computation of the Least Common Subsumer in the Resource Description Framework (RDF). This discussion reveals the need for a formal distinction between relevant and irrelevant information, which we address with a definition of relevance amenable to implementation.
2023
3rd International workshop on Model-driven Organizational and Business Agility, MOBA 2023
978-3-031-45009-9
978-3-031-45010-5
Files in this product:
No files are associated with this product.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11589/262080
Citations
  • Scopus 1