Semantic Interpretation of Top-N Recommendations / Anelli, Vito Walter; Di Noia, Tommaso; Di Sciascio, Eugenio; Ragone, Azzurra; Trotta, Joseph. - In: IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING. - ISSN 1041-4347. - Print. - 34:5 (2022), pp. 2416-2428. [DOI: 10.1109/TKDE.2020.3010215]
Semantic Interpretation of Top-N Recommendations
Vito Walter Anelli; Tommaso Di Noia; Eugenio Di Sciascio; Azzurra Ragone; Joseph Trotta
2022-01-01
Abstract
Over the years, model-based approaches have shown their effectiveness in computing recommendation lists in different domains and settings. By relying on the computation of latent factors, they can recommend items with a very high level of accuracy. Unfortunately, when moving to the latent space, even if the model embeds content-based information, we lose the references to the actual semantics of the recommended items. This makes the interpretation of the recommendation process non-trivial. In this paper, we show how to initialize latent factors in Factorization Machines by using semantic features coming from knowledge graphs to train an interpretable model, which is, in turn, able to provide recommendations with a high level of accuracy. In the presented approach, semantic features are injected into the learning process to retain the original informativeness of the items available in the dataset. By relying on the information encoded in the original knowledge graph, we also propose two metrics to evaluate the semantic accuracy and robustness of knowledge-aware interpretability. An extensive experimental evaluation on six different datasets shows the effectiveness of the interpretable model in terms of accuracy and diversity of recommendation results, as well as interpretability robustness.
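The core idea of the abstract — binding each latent factor of a Factorization Machine to an explicit semantic feature drawn from a knowledge graph, so that scores remain interpretable — could be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the toy knowledge-graph triples, the item names, and the binary initialization scheme are all assumptions made for the example.

```python
import numpy as np

# Toy knowledge-graph features: each item is described by the set of
# (predicate, object) pairs it is linked to in the graph.
# All triples here are illustrative, not taken from the paper's datasets.
kg_features = {
    "item_a": {("genre", "SciFi"), ("director", "R.Scott")},
    "item_b": {("genre", "SciFi"), ("director", "D.Villeneuve")},
    "item_c": {("genre", "Drama"), ("director", "R.Scott")},
}

# Map every distinct (predicate, object) pair to one latent dimension,
# so each factor keeps an explicit, human-readable semantic meaning.
features = sorted({f for fs in kg_features.values() for f in fs})
index = {f: i for i, f in enumerate(features)}
k = len(features)  # number of latent factors = number of semantic features

def init_item_factors(item):
    """Initialize an item's latent vector from its KG features:
    1.0 where the feature holds for the item, 0.0 elsewhere."""
    v = np.zeros(k)
    for f in kg_features[item]:
        v[index[f]] = 1.0
    return v

item_factors = {i: init_item_factors(i) for i in kg_features}

def fm_score(user_vector, item):
    """Dot-product interaction between a user profile expressed in the
    same semantic feature space and an item's latent vector."""
    return float(user_vector @ item_factors[item])

# A user whose profile weights "SciFi" prefers the SciFi items, and the
# score decomposes feature-by-feature, which is what enables the
# interpretation of the recommendation.
user = np.zeros(k)
user[index[("genre", "SciFi")]] = 1.0
ranking = sorted(kg_features, key=lambda i: fm_score(user, i), reverse=True)
```

Because every latent dimension corresponds to a named graph feature, the contribution of each feature to a score can be read off directly, which is the property the paper's interpretability metrics build on.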
File | Size | Format | Access
---|---|---|---
2022_Semantic_Interpretation_of_Top-N_Recommendations_pdfeditoriale.pdf (publisher's version; all rights reserved) | 620.36 kB | Adobe PDF | Catalog managers only
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.