
TAaMR: Targeted Adversarial Attack against Multimedia Recommender Systems / Di Noia, Tommaso; Malitesta, Daniele; Merra, Felice Antonio. - ELECTRONIC. - (2020). (Paper presented at the 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, DSN-W 2020, held in Valencia, Spain, June 29 - July 2, 2020) [10.1109/DSN-W50199.2020.00011].

TAaMR: Targeted Adversarial Attack against Multimedia Recommender Systems

Tommaso Di Noia; Daniele Malitesta; Felice Antonio Merra
2020-01-01

Abstract

Deep learning classifiers are highly vulnerable to adversarial examples, whose existence has raised cybersecurity concerns in many tasks, notably malware detection, computer vision, and speech recognition. While considerable effort has been devoted to investigating attacks and defense strategies in these tasks, only limited work explores the influence of targeted attacks on the input data (e.g., images, textual descriptions, audio) used in multimedia recommender systems (MRs). In this work, we examine the consequences of applying targeted adversarial attacks against the product images of a visual-based MR. We propose a novel adversarial attack approach, called Targeted Adversarial Attack against Multimedia Recommender Systems (TAaMR), to investigate how MR behavior changes when the images of a category of rarely recommended products (e.g., socks) are perturbed so that a deep neural classifier misclassifies them as a class of more frequently recommended products (e.g., running shoes), with alterations barely perceptible to humans. We explore the TAaMR approach by studying the effect of two targeted adversarial attacks (i.e., FGSM and PGD) against the input pictures of two state-of-the-art MRs (i.e., VBPR and AMR). Extensive experiments on two real-world fashion recommendation datasets confirm the effectiveness of TAaMR in altering recommendation lists while preserving human judgment of the perturbed images.
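The targeted FGSM step mentioned in the abstract can be illustrated with a minimal, self-contained sketch. The paper attacks a deep neural image classifier; the toy two-class linear model, its weights, and the epsilon value below are purely illustrative assumptions, not the authors' actual setup. The key idea carries over: a targeted attack steps the input *against* the gradient of the loss with respect to the target class, pushing the classifier's decision toward that class.

```python
import math

# Toy linear classifier: logits = W @ x (2 classes, 3 features).
# Targeted FGSM: x' = clip(x - eps * sign(grad_x L(x, target_class))),
# i.e. move the input to *decrease* the loss of the desired target class.
# W and eps are illustrative assumptions, not values from the paper.
W = [[1.0, -0.5, 0.2],    # class 0 (e.g., "socks")
     [-0.8, 0.9, 0.4]]    # class 1 (e.g., "running shoes")

def logits(x):
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]

def softmax(z):
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def grad_ce_wrt_x(x, target):
    # Analytic gradient of cross-entropy w.r.t. x for a linear model:
    # sum_k (p_k - 1[k == target]) * W_k
    p = softmax(logits(x))
    g = [0.0] * len(x)
    for k, row in enumerate(W):
        coef = p[k] - (1.0 if k == target else 0.0)
        for i, w_i in enumerate(row):
            g[i] += coef * w_i
    return g

def targeted_fgsm(x, target, eps=0.2):
    g = grad_ce_wrt_x(x, target)
    sign = lambda v: 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)
    # Subtract the signed gradient (targeted direction), clip to [0, 1].
    return [max(0.0, min(1.0, x_i - eps * sign(g_i)))
            for x_i, g_i in zip(x, g)]

x = [0.9, 0.1, 0.5]                 # originally scored as class 0
adv = targeted_fgsm(x, target=1)    # nudge toward class 1
print("original logits:", logits(x))
print("adversarial logits:", logits(adv))
```

One such step shrinks the margin toward the target class; iterating it with a projection onto an epsilon-ball around the original image is essentially the PGD variant also evaluated in the paper.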
2020
50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, DSN-W 2020
978-1-7281-7263-7


Use this identifier to cite or link to this document: https://hdl.handle.net/11589/216140
Citations
  • Scopus 33
  • Web of Science 21