Abstract
Applying equal testing and verification effort to all parts of a software system is inefficient, especially when resources are limited and schedules are tight. Therefore, one needs to be able to distinguish low- from high-fault-density components so that testing and verification effort can be concentrated where it is most needed. Such a strategy is expected to detect more faults and thus improve the reliability of the overall system. This paper presents an alternative approach for constructing such classification models, intended to fulfill specific software engineering needs, i.e., handling partial or incomplete information and producing models that are easy to interpret. Our approach to classification is to (1) measure the software system under consideration and (2) build multivariate stochastic models for prediction. We present experimental results obtained by classifying FORTRAN components developed at the NASA Goddard Space Flight Center into two fault density classes: low and high. We also evaluate the accuracy of the model and the insights it provides into the software process.
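The record above does not describe the model form in detail, but the two-step idea (measure each component, then fit a multivariate stochastic model that separates low- from high-fault-density components) can be illustrated with a rough sketch. Everything specific here is assumed for illustration: the metric names (`loc`, `cyclomatic`, `fan_out`), the fault-density threshold, and the use of a Gaussian naive-Bayes-style classifier as a stand-in are not taken from the paper.

```python
# Illustrative sketch only -- not the authors' actual model.
# Step 1: represent measured components; Step 2: fit a simple per-class
# stochastic model over the metrics and use it to predict low/high fault density.

from dataclasses import dataclass
from statistics import mean, pstdev
from math import log, pi

@dataclass
class Component:
    loc: int              # lines of code (hypothetical metric)
    cyclomatic: int       # cyclomatic complexity (hypothetical metric)
    fan_out: int          # number of called components (hypothetical metric)
    faults_per_kloc: float

def label(c: Component, threshold: float = 5.0) -> str:
    # Hypothetical cut-off separating the "low" and "high" fault-density classes.
    return "high" if c.faults_per_kloc > threshold else "low"

def fit_model(components, threshold: float = 5.0):
    """Fit per-class Gaussian parameters (mean, std) for each metric.
    Assumes both classes are represented in the training data."""
    features = lambda c: (c.loc, c.cyclomatic, c.fan_out)
    model = {}
    for cls in ("low", "high"):
        rows = [features(c) for c in components if label(c, threshold) == cls]
        prior = len(rows) / len(components)
        stats = [(mean(col), pstdev(col) or 1.0) for col in zip(*rows)]
        model[cls] = (prior, stats)
    return model

def predict(model, metrics):
    """Return the class with the higher log-posterior for a metric vector."""
    def log_post(cls):
        prior, stats = model[cls]
        lp = log(prior)
        for x, (mu, sd) in zip(metrics, stats):
            lp += -0.5 * log(2 * pi * sd ** 2) - (x - mu) ** 2 / (2 * sd ** 2)
        return lp
    return max(("low", "high"), key=log_post)
```

One reason such a model fits the goals stated in the abstract is that the fitted parameters (class priors and per-metric means and spreads) remain directly inspectable, which supports interpretability even when some metrics are missing or incomplete for a given component.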
Original language | English |
---|---|
Article number | 285903 |
Pages (from-to) | 329-338 |
Number of pages | 10 |
Journal | Proceedings of the International Symposium on Software Reliability Engineering, ISSRE |
Publication status | Published - 1992 |
Externally published | Yes |
Event | 3rd International Symposium on Software Reliability Engineering, ISSRE 1992, Research Triangle Park, United States (7-10 Oct 1992) |
Keywords
- Fault-prone software components
- Machine learning
- Stochastic modeling