Elliot: A Comprehensive and Rigorous Framework for Reproducible Recommender Systems Evaluation / Anelli, Vito Walter; Bellogin, Alejandro; Ferrara, Antonio; Malitesta, Daniele; Merra, Felice Antonio; Pomo, Claudio; Donini, Francesco Maria; Di Noia, Tommaso. - PRINT. - (2021), pp. 2405-2414. (Paper presented at the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2021, held virtually (Canada), July 11-15, 2021) [10.1145/3404835.3463245].
Elliot: A Comprehensive and Rigorous Framework for Reproducible Recommender Systems Evaluation
Vito Walter Anelli;Antonio Ferrara;Daniele Malitesta;Felice Antonio Merra;Claudio Pomo;Tommaso Di Noia
2021-01-01
Abstract
Recommender Systems have been shown to be an effective way to alleviate the over-choice problem and provide accurate, tailored recommendations. However, the impressive number of proposed recommendation algorithms, splitting strategies, evaluation protocols, metrics, and tasks has made rigorous experimental evaluation particularly challenging. Puzzled and frustrated by the continuous recreation of appropriate evaluation benchmarks, experimental pipelines, hyperparameter optimization, and evaluation procedures, we have developed an exhaustive framework to address such needs. Elliot is a comprehensive recommendation framework that aims to run and reproduce an entire experimental pipeline by processing a simple configuration file. The framework loads, filters, and splits the data considering a vast set of strategies (13 splitting methods and 8 filtering approaches, from temporal training-test splitting to nested K-fold cross-validation). Elliot (https://github.com/sisinflab/elliot) optimizes hyperparameters (51 strategies) for several recommendation algorithms (50), selects the best models, compares them with the baselines providing intra-model statistics, computes metrics (36) spanning from accuracy to beyond-accuracy, bias, and fairness, and conducts statistical analysis (Wilcoxon and paired t-test).
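To illustrate the configuration-driven pipeline the abstract describes, a minimal Elliot experiment file might look like the following sketch. The field names (e.g. data_config, test_splitting, simple_metrics) and the sample dataset path follow the project's public documentation, but they may differ between Elliot versions, so treat this as indicative rather than authoritative.

```yaml
experiment:
  dataset: movielens_1m              # name used to label outputs
  data_config:
    strategy: dataset                # load a single ratings file
    dataset_path: ../data/movielens_1m/dataset.tsv
  splitting:
    test_splitting:
      strategy: random_subsampling   # one of the splitting strategies
      test_ratio: 0.2
  top_k: 10
  models:
    ItemKNN:                         # one of the bundled recommenders
      meta:
        hyper_opt_alg: grid          # hyperparameter search strategy
      neighbors: [50, 100]
      similarity: cosine
  evaluation:
    simple_metrics: [nDCG, Precision, Recall]
```

With a file like this, an experiment is typically launched from Python via elliot.run.run_experiment("config.yml"), again per the project README.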