TY - GEN
T1 - Predicting fault-prone components in a Java legacy system
AU - Arisholm, Erik
AU - Briand, Lionel C.
PY - 2006
Y1 - 2006
AB - This paper reports on the construction and validation of fault-proneness prediction models in the context of an object-oriented, evolving legacy system. The goal is to help QA engineers focus their limited verification resources on the parts of the system most likely to contain faults. A number of measures, including code quality, class structure, changes in class structure, and the history of class-level changes and faults, are included as candidate predictors of class fault-proneness. A cross-validated classification analysis shows that the obtained model yields less than 20% false positives and less than 20% false negatives. However, as shown in this paper, statistics regarding classification accuracy tend to inflate the potential usefulness of fault-proneness prediction models. We therefore propose a simple and pragmatic methodology for assessing the cost-effectiveness of the predictions in focusing verification effort. On the basis of the cost-effectiveness analysis, we show that change and fault data from previous releases is paramount to developing a practically useful prediction model. When our model is applied to predict faults in a new release, the estimated potential savings in verification effort are about 29%. In contrast, the estimated savings drop to 0% when history data is not included.
KW - Design
KW - Measurement
KW - Verification
UR - http://www.scopus.com/inward/record.url?scp=34247367238&partnerID=8YFLogxK
U2 - 10.1145/1159733.1159738
DO - 10.1145/1159733.1159738
M3 - Conference contribution
AN - SCOPUS:34247367238
SN - 1595932186
SN - 9781595932181
T3 - ISESE'06 - Proceedings of the 5th ACM-IEEE International Symposium on Empirical Software Engineering
SP - 8
EP - 17
BT - ISESE'06 - Proceedings of the 5th ACM-IEEE International Symposium on Empirical Software Engineering
T2 - ISESE'06 - 5th ACM-IEEE International Symposium on Empirical Software Engineering
Y2 - 21 September 2006 through 22 September 2006
ER -