
Towards Responsible AI in Recommender Systems / Pomo, Claudio. - ELETTRONICO. - (2023). [10.60576/poliba/iris/pomo-claudio_phd2023]

Towards Responsible AI in Recommender Systems

Pomo, Claudio
2023-01-01

Abstract

One of the most interesting success stories of Artificial Intelligence (AI) is the adoption of recommender systems (RSs) by companies such as Netflix, YouTube, and Amazon. Recommender systems have evolved over time thanks to ongoing research in this field, and they are now frequently capable of making remarkably effective suggestions. Given the sheer number of articles published each year, one could assume that the recommendation problem is almost solved. However, the field has been facing new challenges in recent years: namely, the sociological, cognitive, and legislative issues that have recently come to the fore. Hot topics in this area include the quality of new and established models, not only in terms of accuracy but also along beyond-accuracy dimensions, such as biased exposure of specific groups of items or users. Equally important are the transparency properties of the models with respect to the suggestions they propose: a model should be interpretable and capable of explaining the recommendations it gives to the user. How can we independently evaluate the performance of recommender system models? Is there a benchmark against which we can compare them? Which metrics should be considered in a fair recommendation context? Are there metrics suitable for measuring the quality of an explanation? Which techniques are best suited to bring transparency to the consumer? This dissertation outlines a path through the issues of responsible AI in the context of recommender systems, starting with the problems of reproducibility and benchmarking, then addressing model performance with respect to beyond-accuracy metrics and why this analysis is critically important for the consumer experience. Finally, it addresses explainable recommendation techniques and their impact on performance and user experience.
Starting from the recent academic literature in the area, this thesis engages with the issues of responsible AI by offering contributions in the following directions: (i) we propose an unambiguous framework for recommender systems that forms the core of a common benchmarking approach; (ii) we measure and compare various collaborative filtering models for recommender systems in terms of accuracy and beyond-accuracy metrics and their trade-offs; (iii) we improve user experience through non-trivial and explainable recommendations.
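The trade-off between accuracy and beyond-accuracy metrics mentioned in contribution (ii) can be illustrated with a minimal sketch. The data, user IDs, and recommendation lists below are hypothetical toy values, not results from the thesis; Precision@k stands in for an accuracy metric and item coverage (aggregate diversity) for a beyond-accuracy one:

```python
# Toy ground truth: items each user actually interacted with (held-out test set)
relevant = {
    "u1": {"a", "b", "c"},
    "u2": {"b", "d"},
    "u3": {"e"},
}

# Hypothetical top-3 recommendation lists produced by some model
recs = {
    "u1": ["a", "b", "x"],
    "u2": ["b", "b2", "d"],
    "u3": ["a", "b", "e"],
}

catalog = {"a", "b", "b2", "c", "d", "e", "x", "y", "z"}

def precision_at_k(recs, relevant, k=3):
    # Accuracy metric: average fraction of recommended items that are relevant
    scores = [len(set(r[:k]) & relevant[u]) / k for u, r in recs.items()]
    return sum(scores) / len(scores)

def item_coverage(recs, catalog, k=3):
    # Beyond-accuracy metric: fraction of the catalog that appears
    # in at least one user's top-k list (aggregate diversity)
    shown = {i for r in recs.values() for i in r[:k]}
    return len(shown) / len(catalog)

print(f"Precision@3: {precision_at_k(recs, relevant):.3f}")    # → 0.556
print(f"Item coverage: {item_coverage(recs, catalog):.3f}")    # → 0.667
```

A model can raise precision by recommending only a handful of popular items to everyone, which drives coverage down; measuring both, as the thesis argues, exposes this trade-off.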
2023
Recommender systems; Personalization; Responsible AI; Reproducibility
Files in this record:
File: 35 ciclo-POMO Claudio.pdf (open access)
Type: Doctoral thesis
License: Creative Commons
Size: 4.49 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11589/246681