TY - JOUR
T1 - Scalable and Accurate Test Case Prioritization in Continuous Integration Contexts
AU - Yaraghi, Ahmadreza Saboor
AU - Bagherzadeh, Mojtaba
AU - Kahani, Nafiseh
AU - Briand, Lionel C.
PY - 2023/4/1
Y1 - 2023/4/1
N2 - Continuous Integration (CI) requires efficient regression testing to ensure software quality without significantly delaying its CI builds. This warrants the need for techniques to reduce regression testing time, such as Test Case Prioritization (TCP) techniques that prioritize the execution of test cases to detect faults as early as possible. Many recent TCP studies employ various Machine Learning (ML) techniques to deal with the dynamic and complex nature of CI. However, most of them use a limited number of features for training ML models and evaluate the models on subjects for which the application of TCP makes little practical sense, due to their short regression testing time and low number of failed builds. In this work, we first define, at a conceptual level, a data model that captures data sources and their relations in a typical CI environment. Second, based on this data model, we define a comprehensive set of features that covers all features previously used by related studies. Third, we develop methods and tools to collect the defined features for 25 open-source software systems with enough failed builds and whose regression testing takes at least five minutes. Fourth, relying on the collected dataset containing this comprehensive feature set, we answer four research questions concerning data collection time, the effectiveness of ML-based TCP, the impact of the features on its effectiveness, the decay of ML-based TCP models over time, and the trade-off between data collection time and the effectiveness of ML-based TCP techniques.
AB - Continuous Integration (CI) requires efficient regression testing to ensure software quality without significantly delaying its CI builds. This warrants the need for techniques to reduce regression testing time, such as Test Case Prioritization (TCP) techniques that prioritize the execution of test cases to detect faults as early as possible. Many recent TCP studies employ various Machine Learning (ML) techniques to deal with the dynamic and complex nature of CI. However, most of them use a limited number of features for training ML models and evaluate the models on subjects for which the application of TCP makes little practical sense, due to their short regression testing time and low number of failed builds. In this work, we first define, at a conceptual level, a data model that captures data sources and their relations in a typical CI environment. Second, based on this data model, we define a comprehensive set of features that covers all features previously used by related studies. Third, we develop methods and tools to collect the defined features for 25 open-source software systems with enough failed builds and whose regression testing takes at least five minutes. Fourth, relying on the collected dataset containing this comprehensive feature set, we answer four research questions concerning data collection time, the effectiveness of ML-based TCP, the impact of the features on its effectiveness, the decay of ML-based TCP models over time, and the trade-off between data collection time and the effectiveness of ML-based TCP techniques.
KW - Machine learning
KW - continuous integration
KW - software testing
KW - test case prioritization
KW - test case selection
UR - http://www.scopus.com/inward/record.url?scp=85133610054&partnerID=8YFLogxK
U2 - 10.1109/TSE.2022.3184842
DO - 10.1109/TSE.2022.3184842
M3 - Article
AN - SCOPUS:85133610054
SN - 0098-5589
VL - 49
SP - 1615
EP - 1639
JO - IEEE Transactions on Software Engineering
JF - IEEE Transactions on Software Engineering
IS - 4
ER -