A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks / Deldjoo, Yashar; Di Noia, Tommaso; Merra, Felice Antonio. - In: ACM COMPUTING SURVEYS. - ISSN 0360-0300. - STAMPA. - 54:2(2021). [10.1145/3439729]
A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks
Yashar Deldjoo;Tommaso Di Noia;Felice Antonio Merra
2021-01-01
Abstract
Latent-factor models (LFM) based on collaborative filtering (CF), such as matrix factorization (MF) and deep CF methods, are widely used in modern recommender systems (RS) due to their excellent performance and recommendation accuracy. However, this success has been accompanied by a major new challenge: many applications of machine learning (ML) are adversarial in nature [146]. In recent years, it has been shown that these methods are vulnerable to adversarial examples, i.e., subtle but non-random perturbations designed to force recommendation models to produce erroneous outputs. The goal of this survey is two-fold: (i) to present recent advances in adversarial machine learning (AML) for the security of RS (i.e., attacking and defending recommendation models), and (ii) to show another successful application of AML in generative adversarial networks (GANs) for generative tasks, thanks to their ability to learn (high-dimensional) data distributions. In this survey, we provide an exhaustive literature review of 76 articles published in major RS and ML journals and conferences. This review serves as a reference for the RS community working on the security of RS or on generative models based on GANs to improve their quality.
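To make the notion of an adversarial example on an MF model concrete, the following is a minimal sketch (not taken from the survey itself) of an FGSM-style perturbation of a user's latent factors under a BPR-style pairwise loss; all names, sizes, and the epsilon budget are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: "subtle but non-random" perturbation of MF embeddings.
# All dimensions and hyperparameters below are toy assumptions.
rng = np.random.default_rng(0)
n_users, n_items, k = 4, 6, 8
P = rng.normal(scale=0.1, size=(n_users, k))  # user latent factors
Q = rng.normal(scale=0.1, size=(n_items, k))  # item latent factors

def score(P, Q, u, i):
    # MF prediction: dot product of user and item factors.
    return P[u] @ Q[i]

def bpr_loss(P, Q, u, i, j):
    # Pairwise (BPR-style) loss for (user u, positive item i, negative item j).
    x = score(P, Q, u, i) - score(P, Q, u, j)
    return -np.log(1.0 / (1.0 + np.exp(-x)))

u, i, j = 0, 1, 2
# Hand-derived gradient of the loss w.r.t. the user's embedding:
# dL/dP[u] = -(1 - sigma(x)) * (Q[i] - Q[j])
x = score(P, Q, u, i) - score(P, Q, u, j)
sigma = 1.0 / (1.0 + np.exp(-x))
grad_Pu = -(1.0 - sigma) * (Q[i] - Q[j])

# FGSM step: move the embedding in the sign direction of the gradient,
# bounded by a small epsilon budget, so the loss increases.
eps = 0.05
delta = eps * np.sign(grad_Pu)

clean = bpr_loss(P, Q, u, i, j)
P_adv = P.copy()
P_adv[u] += delta
attacked = bpr_loss(P_adv, Q, u, i, j)
print(f"loss before: {clean:.4f}, after perturbation: {attacked:.4f}")
```

Because the loss is convex and decreasing in the score margin, this bounded sign-gradient step strictly increases it, which is the mechanism adversarial training defenses (and the attacks surveyed here) exploit.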
File | Access | Type | License | Size | Format
---|---|---|---|---|---
2021_A_Survey_on_Adversarial_Recommender_Systems:_From_Attack/Defense_Strategies_to_Generative_Adversarial_Networks_pdfeditoriale.pdf | Catalogue managers only | Editorial version | All rights reserved | 1.94 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.