Knowledge-aware Interpretable Recommender Systems / Anelli, Vito Walter; Bellini, Vito; Di Noia, Tommaso; Di Sciascio, Eugenio (Studies on the Semantic Web). - In: Knowledge Graphs for eXplainable Artificial Intelligence: Foundations, Applications and Challenges / [edited by] Ilaria Tiddi, Freddy Lécué, Pascal Hitzler. - PRINT. - Amsterdam, The Netherlands: IOS Press, 2020. - ISBN 978-1-64368-080-4. - pp. 101-104 [10.3233/SSW200014]
Knowledge-aware Interpretable Recommender Systems
Vito Walter Anelli; Vito Bellini; Tommaso Di Noia; Eugenio Di Sciascio
2020-01-01
Abstract
Recommender systems are everywhere, from e-commerce to streaming platforms. They help users lost in the maze of available information, items, and services find their way. Among them, over the years, approaches based on machine learning techniques have shown particularly good performance for top-N recommendation engines. Unfortunately, they mostly behave as black boxes and, even when they embed some form of description of the items to recommend, after the training phase they move such descriptions into a latent space, thus losing the explicit semantics of the recommended items. As a consequence, system designers struggle to provide satisfying explanations for the recommendation lists delivered to the end-user. In this chapter, we describe two approaches to recommendation that make use of the semantics encoded in a knowledge graph to train interpretable models that keep the original semantics of the item descriptions, thus providing a powerful tool to automatically compute explainable results. The two methods rely on two completely different machine learning algorithms, namely factorization machines and autoencoder neural networks. We also show how to measure the interpretability of the models through the introduction of two metrics: semantic accuracy and robustness.
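As background for the first approach, the standard second-order factorization machine scoring function (Rendle, 2010) is

\[
\hat{y}(\mathbf{x}) = w_0 + \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \langle \mathbf{v}_i, \mathbf{v}_j \rangle \, x_i x_j
\]

where each x_i is an input feature, w_i its linear weight, and v_i its latent factor vector. When the x_i encode explicit knowledge-graph features of an item rather than anonymous indices, the learned weights can be read back against those features, which is the kind of interpretability the abstract refers to. This is standard FM notation, not taken from the chapter itself.

For the second approach, the following is a minimal, generic autoencoder over binary item-feature vectors, written with numpy only. It is an illustrative sketch, not the chapter's architecture; the dimensions, names, and hand-written training loop are assumptions made for this example.

# Generic autoencoder sketch: NOT the chapter's model.
# All sizes and hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_features, n_hidden = 20, 5                              # hypothetical sizes
X = rng.integers(0, 2, (100, n_features)).astype(float)   # toy binary item-feature matrix

W_enc = rng.normal(0.0, 0.1, (n_features, n_hidden))      # encoder weights
W_dec = rng.normal(0.0, 0.1, (n_hidden, n_features))      # decoder weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(200):                                      # plain batch gradient descent
    H = sigmoid(X @ W_enc)                                # encode
    X_hat = sigmoid(H @ W_dec)                            # decode / reconstruct
    delta_out = (X_hat - X) * X_hat * (1.0 - X_hat)       # output-layer error signal
    delta_hid = (delta_out @ W_dec.T) * H * (1.0 - H)     # backpropagated to hidden layer
    W_dec -= lr * (H.T @ delta_out) / len(X)
    W_enc -= lr * (X.T @ delta_hid) / len(X)

H = sigmoid(X @ W_enc)
mse = float(np.mean((sigmoid(H @ W_dec) - X) ** 2))
print(f"reconstruction MSE after training: {mse:.4f}")

A knowledge-aware variant of this idea would tie input, hidden, or output units to explicit knowledge-graph features so that activations remain human-readable, in the spirit of the interpretable models the abstract describes.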