Systematic Evaluation of e-Learning Systems: an Experimental Validation

Ardito, C.; Costabile, M. F.; De Angeli, A.; Lanzilotti, R.
2006-01-01

Abstract

The evaluation of e-learning applications deserves special attention, and evaluators need effective methodologies and appropriate guidelines to perform their task. We have proposed a methodology, called eLSE (e-Learning Systematic Evaluation), which combines a specific inspection technique with user testing. This inspection technique aims to allow inspectors who may not have wide experience in evaluating e-learning systems to perform accurate evaluations. It is based on the use of evaluation patterns, called Abstract Tasks (ATs), which precisely describe the activities to be performed during inspection; for this reason, it is called AT inspection. In this paper, we present an empirical validation of the AT inspection technique: three groups of novice inspectors evaluated a commercial e-learning system applying AT inspection, heuristic inspection, or user testing. Results show an advantage of AT inspection over the other two usability evaluation methods, demonstrating that Abstract Tasks are effective and efficient tools that guide evaluators and improve their performance. Important methodological considerations on the reliability of usability evaluation techniques are discussed.
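
To make the idea of an evaluation pattern concrete, the following is a minimal sketch, in Python, of how an Abstract Task might be represented as a structured record that tells an inspector what to examine, how, and what to report. The field names and the example task are assumptions introduced here for illustration; they are not taken from the paper's actual AT template.

from dataclasses import dataclass
from typing import List

# A minimal sketch of an evaluation pattern ("Abstract Task") as a structured
# record. Field names and content are illustrative assumptions only; they are
# not the authors' actual AT template from the eLSE methodology.
@dataclass
class AbstractTask:
    code: str               # classification code, e.g. "AT_N1" (hypothetical)
    title: str              # short name of the pattern
    focus_of_action: str    # the part of the system the inspector examines
    intent: str             # what inspecting this aspect should verify
    activities: List[str]   # step-by-step activities to perform during inspection
    output: str             # what the inspector reports at the end

# Hypothetical example: an AT guiding inspection of course navigation.
navigation_at = AbstractTask(
    code="AT_N1",
    title="Check course navigation",
    focus_of_action="Navigation structure of the e-learning course",
    intent="Verify that learners always know where they are and how to proceed",
    activities=[
        "Identify the navigation aids available on each page",
        "Follow a lesson path and note any point where orientation is lost",
    ],
    output="List of navigation problems found, each with location and severity",
)

if __name__ == "__main__":
    print(f"{navigation_at.code}: {navigation_at.title}")
    for step in navigation_at.activities:
        print(" -", step)
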
2006
4th Nordic Conference on Human-Computer Interaction - Changing Roles, NordiCHI 2006
978-1-59593-325-6
Systematic Evaluation of e-Learning Systems: an Experimental Validation / Ardito, C.; Costabile, M. F.; De Angeli, A.; Lanzilotti, R. - ELECTRONIC. - (2006), pp. 195-202. (Paper presented at the 4th Nordic Conference on Human-Computer Interaction - Changing Roles, NordiCHI 2006, held in Oslo, Norway, October 14-18, 2006) [10.1145/1182475.1182496].
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11589/193587
Citations
  • Scopus: 23
  • Web of Science (ISI): not available