
Semantic Interpretation of Top-N Recommendations / Anelli, Vito Walter; Di Noia, Tommaso; Di Sciascio, Eugenio; Ragone, Azzurra; Trotta, Joseph. - In: IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING. - ISSN 1041-4347. - Electronic. - (2022). [DOI: 10.1109/TKDE.2020.3010215]

Semantic Interpretation of Top-N Recommendations

Vito Walter Anelli; Tommaso Di Noia; Eugenio Di Sciascio; Azzurra Ragone; Joseph Trotta
2022-01-01

Abstract

Over the years, model-based approaches have shown their effectiveness in computing recommendation lists in different domains and settings. By relying on the computation of latent factors, they can recommend items with a very high level of accuracy. Unfortunately, when moving to the latent space, even if the model embeds content-based information, we lose the reference to the actual semantics of the recommended items. This makes the interpretation of the recommendation process non-trivial. In this paper, we show how to initialize latent factors in Factorization Machines with semantic features coming from knowledge graphs in order to train an interpretable model that, in turn, provides recommendations with a high level of accuracy. In the presented approach, semantic features are injected into the learning process to retain the original informativeness of the items available in the dataset. By relying on the information encoded in the original knowledge graph, we also propose two metrics to evaluate the semantic accuracy and robustness of knowledge-aware interpretability. An extensive experimental evaluation on six datasets shows the effectiveness of the interpretable model in terms of accuracy, diversity of recommendation results, and interpretability robustness.
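
The abstract describes initializing the latent factors of a Factorization Machine with semantic features taken from a knowledge graph, so that each latent dimension keeps an explicit meaning that can later be used to explain a recommendation. The following Python sketch illustrates that general idea only; it is not the authors' method. The feature names, the toy catalogue, the binary initialization, and the randomly drawn user profile are all illustrative assumptions.

# A minimal sketch (not the paper's exact model) of knowledge-aware
# initialization: each (predicate, object) pair from the knowledge graph
# becomes one latent dimension, so learned factors stay semantically labeled.
import numpy as np

# Hypothetical KG-derived features for a tiny movie catalogue.
kg_features = ["dbo:director=Nolan", "dbo:genre=SciFi", "dbo:genre=Drama"]
item_features = {
    "Inception":    {"dbo:director=Nolan", "dbo:genre=SciFi"},
    "Interstellar": {"dbo:director=Nolan", "dbo:genre=SciFi", "dbo:genre=Drama"},
}

feature_index = {f: i for i, f in enumerate(kg_features)}
n_features = len(kg_features)

# One latent dimension per KG feature: the binary item-feature vector is the
# starting point of the item embedding, so dimension i still "means"
# kg_features[i] after training refines the values.
def initial_item_factors(item):
    v = np.zeros(n_features)
    for f in item_features[item]:
        v[feature_index[f]] = 1.0
    return v

V = {item: initial_item_factors(item) for item in item_features}

# A user profile in the same semantic space; in a real factorization machine
# both sides would be refined by gradient descent on observed feedback.
rng = np.random.default_rng(0)
user_profile = rng.normal(scale=0.1, size=n_features)

# Scoring with a naive explanation: per-dimension contributions map directly
# back to KG predicates because the dimensions were never anonymized.
for item, v in V.items():
    contributions = user_profile * v
    top = kg_features[int(np.argmax(np.abs(contributions)))]
    print(f"{item}: score={contributions.sum():+.3f}, most influential feature: {top}")

Because the dimensions are tied to knowledge-graph statements, the largest contributions in the score can be read as candidate explanations, which is the property the paper's interpretability metrics are designed to evaluate.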


Use this identifier to cite or link to this document: https://hdl.handle.net/11589/216149
Citations
  • Scopus: 18
  • Web of Science (ISI): 11