Poison-RAG: Adversarial Data Poisoning Attacks on Retrieval-Augmented Generation in Recommender Systems

Nazary, Fatemeh; Deldjoo, Yashar; Noia, Tommaso Di
2025

Abstract

This study presents Poison-RAG, a framework for adversarial data poisoning attacks targeting retrieval-augmented generation (RAG)-based recommender systems. Poison-RAG manipulates item metadata, such as tags and descriptions, to influence recommendation outcomes. Using item metadata generated through a large language model (LLM) and embeddings derived via the OpenAI API, we explore the impact of provider-side adversarial poisoning attacks, where attacks are designed to promote long-tail items and demote popular ones. Two attack strategies are proposed: local modifications, which personalize tags for each item using BERT embeddings, and global modifications, which apply uniform tags across the dataset. Experiments conducted on the MovieLens dataset in a black-box setting reveal that local strategies improve manipulation effectiveness by up to 50%, while global strategies risk boosting already popular items. The results indicate that popular items are more susceptible to attacks, whereas long-tail items are harder to manipulate. Approximately 70% of items lack tags, presenting a cold-start challenge; data augmentation and synthesis are proposed as potential defense mechanisms to enhance RAG-based systems’ resilience. The findings emphasize the need for robust metadata management to safeguard recommendation frameworks. Code and data are available at https://github.com/atenanaz/Poison-RAG.
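To make the local-modification idea concrete, the sketch below is an illustrative reconstruction, not the authors' code: it ranks candidate tags for a single item by cosine similarity between BERT-style sentence embeddings of the tags and of the item description, and keeps the top-k as that item's personalized tags. The embedding model name, tag list, and function names are assumptions made for the example.

# Illustrative sketch of a "local" tag selection step (per-item, embedding-based).
# Assumes a BERT-style sentence-embedding backend; all names below are hypothetical.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the paper's BERT embeddings

def pick_local_tags(item_description: str, candidate_tags: list[str], k: int = 3) -> list[str]:
    """Rank candidate tags by cosine similarity to the item description and keep the top-k."""
    # Encode the description and all candidate tags; normalized vectors make the
    # dot product equal to cosine similarity.
    vecs = model.encode([item_description] + candidate_tags, normalize_embeddings=True)
    item_vec, tag_vecs = vecs[0], vecs[1:]
    scores = tag_vecs @ item_vec
    top = np.argsort(scores)[::-1][:k]
    return [candidate_tags[i] for i in top]

if __name__ == "__main__":
    tags = ["cult classic", "award-winning", "hidden gem", "blockbuster", "family friendly"]
    print(pick_local_tags("A quiet indie drama about a small-town librarian.", tags))

A global modification, by contrast, would skip the per-item ranking and attach one fixed tag set to every item in the catalog, which is why the abstract notes it risks boosting already popular items.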
47th European Conference on Information Retrieval, ECIR 2025
ISBN: 9783031887161
ISBN: 9783031887178
Poison-RAG: Adversarial Data Poisoning Attacks on Retrieval-Augmented Generation in Recommender Systems / Nazary, Fatemeh; Deldjoo, Yashar; Noia, Tommaso Di. - LNCS 15575 (2025), pp. 239-251. (47th European Conference on Information Retrieval, ECIR 2025) [DOI: 10.1007/978-3-031-88717-8_18].
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11589/292023
Citations
  • Scopus: 5
  • Web of Science: not available