Abstract
Test-based model generation by classical automata learning is very expensive: it requires an impractically large number of queries to the system, each of which must be implemented as a system-level test case. Key to the tractability of observation-based model generation are powerful optimizations that exploit different kinds of expert knowledge to drastically reduce the number of required queries, and thus the testing effort. In this paper, we present a thorough experimental analysis of the second-order effects between such optimizations in order to maximize their combined impact.
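To illustrate the kind of optimization the abstract refers to, the following is a minimal, hypothetical sketch (not the implementation evaluated in the paper) of a query filter that exploits prefix-closedness of a reactive system's behaviour: membership queries are answered from previously observed results whenever possible, so that fewer queries have to be executed as actual system-level tests. All class and method names below are illustrative assumptions.

```python
class SystemUnderTest:
    """Stands in for the real system; every call is one system-level test."""
    def __init__(self, accepted_traces):
        self.accepted = set(accepted_traces)
        self.tests_run = 0

    def execute(self, trace):
        self.tests_run += 1
        return trace in self.accepted


class PrefixClosureFilter:
    """Answers membership queries from earlier results where prefix-closedness
    allows it, and only falls back to a real test otherwise."""
    def __init__(self, sut):
        self.sut = sut
        self.known_accepted = set()
        self.known_rejected = set()

    def query(self, trace):
        # In a prefix-closed language, every prefix of an accepted trace is accepted.
        if any(acc[:len(trace)] == trace for acc in self.known_accepted):
            return True
        # Likewise, every extension of a rejected trace is rejected.
        if any(trace[:len(rej)] == rej for rej in self.known_rejected):
            return False
        result = self.sut.execute(trace)  # unavoidable system-level test
        (self.known_accepted if result else self.known_rejected).add(trace)
        return result


if __name__ == "__main__":
    sut = SystemUnderTest({("a",), ("a", "b"), ("a", "b", "c")})
    oracle = PrefixClosureFilter(sut)
    queries = [("a", "b", "c"), ("a", "b"), ("a",), ("b",), ("b", "a")]
    answers = [oracle.query(t) for t in queries]
    # 5 queries are answered, but only 2 system-level tests are executed.
    print("queries answered:", len(queries), "tests executed:", sut.tests_run)
```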
| Original language | English |
|---|---|
| Pages (from-to) | 147-156 |
| Number of pages | 10 |
| Journal | Innovations in Systems and Software Engineering |
| Volume | 1 |
| Issue number | 2 |
| Publication status | Published - Sep 2005 |
| Externally published | Yes |