How to Make Latent Factors Interpretable by Feeding Factorization Machines with Knowledge Graphs

Vito Walter Anelli; Tommaso Di Noia; Eugenio Di Sciascio
2019-01-01

Abstract

Model-based approaches to recommendation can recommend items with a very high level of accuracy. Unfortunately, even when the model embeds content-based information, once we move to a latent space we lose the references to the actual semantics of the recommended items, which makes the interpretation of the recommendation process non-trivial. In this paper, we show how to initialize the latent factors of Factorization Machines with semantic features coming from a knowledge graph, in order to train an interpretable model. In our model, semantic features are injected into the learning process so that the original informativeness of the items available in the dataset is retained. The accuracy and effectiveness of the trained model have been tested on two well-known recommender-system datasets. By relying on the information encoded in the original knowledge graph, we have also evaluated the semantic accuracy and robustness of the knowledge-aware interpretability of the final model.
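The core idea sketched in the abstract, tying each latent dimension of a Factorization Machine to an explicit knowledge-graph feature at initialization time, can be illustrated with a minimal Python sketch. The item names, (predicate, object) pairs, and matrix shapes below are purely illustrative assumptions, not the paper's actual datasets or implementation.

```python
import numpy as np

# Toy knowledge-graph features per item (hypothetical data):
# each feature is a (predicate, object) pair extracted from the KG.
kg_features = {
    "Blade Runner": {("director", "Ridley Scott"), ("genre", "Science Fiction")},
    "Alien": {("director", "Ridley Scott"), ("genre", "Horror")},
}

# One latent dimension per distinct KG feature, so every factor keeps
# an explicit semantic label instead of being an anonymous dimension.
all_features = sorted({f for fs in kg_features.values() for f in fs})
feature_index = {f: k for k, f in enumerate(all_features)}
items = sorted(kg_features)

# Item factor matrix V (items x factors): start from small noise and set
# V[i, k] = 1 when item i exhibits KG feature k.
rng = np.random.default_rng(42)
V = rng.normal(0.0, 0.01, size=(len(items), len(all_features)))
for i, item in enumerate(items):
    for f in kg_features[item]:
        V[i, feature_index[f]] = 1.0

# V can now seed the item embeddings of a Factorization Machine; after
# training, factor k can still be read back as the KG feature all_features[k].
for k, f in enumerate(all_features):
    print(f"factor {k} <- {f}")
```

Because the mapping from factor index to knowledge-graph feature is fixed before training, the learned weights on each factor remain attributable to a concrete semantic property of the items, which is what makes the resulting model interpretable.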
2019
18th International Semantic Web Conference, ISWC 2019
978-3-030-30792-9
How to Make Latent Factors Interpretable by Feeding Factorization Machines with Knowledge Graphs / Anelli, Vito Walter; Di Noia, Tommaso; Di Sciascio, Eugenio; Ragone, Azzurra; Trotta, Joseph. - PRINT. - 11778:(2019), pp. 38-56. (Paper presented at the 18th International Semantic Web Conference, ISWC 2019, held in Auckland, New Zealand, October 26–30, 2019) [10.1007/978-3-030-30793-6_3].
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11589/203377
Citations
  • Scopus: 38
  • ISI (Web of Science): 28