Knowledge-aware Interpretable Recommender Systems / Anelli, Vito Walter; Bellini, Vito; Di Noia, Tommaso; Di Sciascio, Eugenio. - STAMPA. - 47:(2020), pp. 101-104. [10.3233/SSW200014]

Knowledge-aware Interpretable Recommender Systems

Vito Walter Anelli; Vito Bellini; Tommaso Di Noia; Eugenio Di Sciascio
2020-01-01

Abstract

Recommender systems are everywhere, from e-commerce to streaming platforms. They help users lost in the maze of available information, items, and services to find their way. Among them, over the years, approaches based on machine learning techniques have shown particularly good performance for top-N recommendation engines. Unfortunately, they mostly behave as black boxes and, even when they embed some form of description of the items to recommend, after the training phase they move such descriptions into a latent space, thus losing the explicit semantics of the recommended items. As a consequence, system designers struggle to provide satisfying explanations for the recommendation lists presented to the end user. In this chapter, we describe two approaches to recommendation that make use of the semantics encoded in a knowledge graph to train interpretable models. These models keep the original semantics of the item descriptions, thus providing a powerful tool to automatically compute explainable results. The two methods rely on two completely different machine learning algorithms, namely factorization machines and autoencoder neural networks. We also show how to measure the interpretability of the models through the introduction of two metrics: semantic accuracy and robustness.
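For context on the first of the two models: a second-order factorization machine, in its standard formulation (Rendle, 2010), scores a user-item interaction described by a feature vector x. How the feature vector is built from knowledge-graph statements is a detail of the chapter itself and is assumed here, not stated in this record; the sketch below is only the generic model:

\[
\hat{y}(\mathbf{x}) = w_0 + \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \langle \mathbf{v}_i, \mathbf{v}_j \rangle \, x_i x_j
\]

where \(w_0\) is a global bias, \(w_i\) weighs each individual feature, and the factorized pairwise term \(\langle \mathbf{v}_i, \mathbf{v}_j \rangle\) models interactions between feature pairs. Because every learned weight is attached to an explicit feature rather than to an opaque latent dimension, the coefficients can be read back in terms of the original item descriptions, which is the kind of interpretability the abstract refers to.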
2020
Knowledge Graphs for eXplainable Artificial Intelligence: Foundations, Applications and Challenges
978-1-64368-080-4
IOS Press
Files for this item:
No files are associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11589/216147