An assessment of published evaluations of requirements management tools

Austen Rainer, Sarah Beecham, Cei Sanderson

Research output: Contribution to conference › Paper › peer-review

Abstract

Context: The traditional literature review is a low-cost, relatively quick, but potentially ineffective method for evaluating tools. Practitioners appear to place greater emphasis on the practical constraints of an evaluation (e.g. that it is low cost and quick) and on the efficacy of the technology for the company, rather than on generic scientific results. By contrast, academia appears to place greater emphasis on theory confirmation, rigour and validity; academic literature reviews focus on literature published in peer-reviewed journals and conferences, and tend not to consider the trade and ‘grey’ literature.

Objectives: To assess the quality and quantity of published evaluations of requirements management tools (RMTs) reported in the academic, ‘grey’ and trade literatures.

Method: Three independent literature reviews were conducted to identify published evaluations of RMTs. The three reviews were conducted by three different types of reviewer: a practitioner in a company, an experienced researcher, and 19 final-year undergraduate students. The researcher and the students followed a version of Evidence Based Software Engineering to undertake their literature reviews; the practitioner undertook an ad hoc literature review. Publications were then screened to select higher-quality evaluations, which were then analysed to identify the RMTs evaluated.

Results: The three literature reviews found a total of 28 evaluations referring to 14 RMTs, of which 6 evaluations were duplicates, giving 22 unique evaluations. The evaluations were published between approximately 2000 and 2007, an average of about 3 evaluations per year.

Conclusions/implications: Given the number of commercial RMTs on the market (>40) and the few evaluations published per year, there are surprisingly few higher-quality evaluations, and there is a noticeable bias toward evaluating the market-leading RMTs. Given the rate of change in the IT industry, there may be a need to re-evaluate RMTs every two years or less. Overall, there appears to be a poor ‘base’ of up-to-date published evaluations of RMTs available for use in literature reviews. Literature reviews would appear to be useful for short-listing RMTs for subsequent in-company evaluation, and for benchmarking, but care should be taken to include non-market-leading RMTs in the shortlist.

Original language: English
Publication status: Published - 2009
Externally published: Yes
Event: 13th International Conference on Evaluation and Assessment in Software Engineering, EASE 2009 - Durham, United Kingdom
Duration: 20 Apr 2009 – 21 Apr 2009

Conference

Conference: 13th International Conference on Evaluation and Assessment in Software Engineering, EASE 2009
Country/Territory: United Kingdom
City: Durham
Period: 20/04/09 – 21/04/09

Keywords

  • Evaluation
  • Literature Review
  • Requirements Management Tool
  • Scoping study
  • Systematic Mapping
