TY - JOUR
T1 - Dynamic testing via automata learning
AU - Raffelt, Harald
AU - Merten, Maik
AU - Steffen, Bernhard
AU - Margaria, Tiziana
PY - 2009
Y1 - 2009
N2 - This paper presents dynamic testing, a method that exploits automata learning to systematically test (black box) systems almost without prerequisites. Based on interface descriptions and optional sample test cases, our method successively explores the system under test (SUT) in order to extrapolate a behavioural model. This model is in turn used to steer the further exploration process. Due to the applied learning technique, our method is optimal in the sense that the extrapolated models are most concise (i.e. state minimal) in consistently representing all the information gathered during the exploration. Using the LearnLib, our framework for automata learning, our method can be elegantly combined with numerous optimisations of the learning procedure, with various choices of model structures, and with the option of dynamically/interactively enlarging the alphabet underlying the learning process. The latter is important in the Web context, where totally new situations may arise when following links. All these features are illustrated using as a case study the web application Mantis, a bug tracking system widely used in practice. In addition, we present another case study that demonstrates the scalability of the approach. We show how the dynamic testing procedure works and how behavioural models arise that concisely summarize the current testing effort. It turns out that these models reveal the system structure from a user perspective. Besides steering the automatic exploration process, they are ideal for user guidance and for supporting analyses to improve system understanding.
UR - http://www.scopus.com/inward/record.url?scp=70350430875&partnerID=8YFLogxK
U2 - 10.1007/s10009-009-0120-7
DO - 10.1007/s10009-009-0120-7
M3 - Article
AN - SCOPUS:70350430875
SN - 1433-2779
VL - 11
SP - 307
EP - 324
JO - International Journal on Software Tools for Technology Transfer
JF - International Journal on Software Tools for Technology Transfer
IS - 4
ER -