Providing an empirical basis for optimizing the verification and testing phases of software development

Lionel C. Briand, Victor R. Basili, Christopher J. Hetmanski

Research output: Contribution to journal › Conference article › peer-review

Abstract

Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and schedules are tight. Therefore, one needs to be able to differentiate low- from high-fault-density components so that testing/verification effort can be concentrated where it is needed most. Such a strategy is expected to detect more faults and thus improve the reliability of the overall system. This paper presents an alternative approach for constructing such models, intended to fulfill specific software engineering needs (i.e., dealing with partial/incomplete information and creating models that are easy to interpret). Our approach to classification is to (1) measure the software system under consideration and (2) build multivariate stochastic models for prediction. We present experimental results obtained by classifying FORTRAN components developed at the NASA Goddard Space Flight Center into two fault density classes: low and high. We also evaluate the accuracy of the model and the insights it provides into the software process.
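For illustration only, the sketch below shows the two-step workflow the abstract describes: measure components, then build a multivariate model that classifies them into low/high fault density. It uses logistic regression as a stand-in for the paper's multivariate stochastic models; the metric names, sample values, and fault-density threshold are all hypothetical, not taken from the paper.

```python
# Illustrative sketch only -- logistic regression stands in for the
# paper's multivariate stochastic models; all metrics and values here
# are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Step 1: "measure the software system" -- one row per component,
# columns are static-code metrics (hypothetical examples):
# [lines_of_code, cyclomatic_complexity, num_calls]
X = np.array([
    [120,   4,  3],
    [900,  35, 22],
    [250,  10,  7],
    [1400, 48, 30],
    [80,    3,  1],
    [600,  25, 15],
])

# Known faults per KLOC from past releases; components above a
# (hypothetical) threshold are labeled high fault density (1).
faults_per_kloc = np.array([0.5, 6.2, 1.1, 8.0, 0.2, 4.9])
THRESHOLD = 2.0
y = (faults_per_kloc > THRESHOLD).astype(int)

# Step 2: build a multivariate classifier from the measurements.
model = LogisticRegression().fit(X, y)

# Rank new components by predicted probability of being fault-prone,
# so verification/testing effort can be concentrated on the riskiest.
new_components = np.array([[1100, 40, 25], [150, 5, 2]])
risk = model.predict_proba(new_components)[:, 1]
for comp, p in zip(new_components, risk):
    print(f"component {comp.tolist()}: P(high fault density) = {p:.2f}")
```

In practice, such a model would be trained on historical project data and validated (e.g., by cross-validation) before being used to allocate testing effort.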

Original language: English
Article number: 285903
Pages (from-to): 329-338
Number of pages: 10
Journal: Proceedings of the International Symposium on Software Reliability Engineering, ISSRE
DOIs
Publication status: Published - 1992
Externally published: Yes
Event: 3rd International Symposium on Software Reliability Engineering, ISSRE 1992 - Research Triangle Park, United States
Duration: 7 Oct 1992 – 10 Oct 1992

Keywords

  • Fault-prone software components
  • Machine learning
  • Stochastic modeling
