
Formal Methods Meet AI Guidance for Correct Software Development

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

Active automata learning is an effective approach for deriving behavioral models in learning-based software testing. A major obstacle, however, is defining the learning alphabet that captures the possible user interactions with the system under test. We propose an AI-assisted method that leverages large language models (LLMs) to automate and guide this process. Our approach uses two designed prompts that analyze a web application's source code: the first extracts all possible frontend interactions and presents them as an editable, human-readable list for validation and abstraction refinement; the second uses this refined list together with the source code to define concrete alphabet symbols for learning, including their implementations. This method lowers the entry barrier to applying active automata learning and improves the quality of the resulting models. We demonstrate its effectiveness on a real web application and show how the learned model can serve as a behavioral reference for formal analysis and comparison.
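The two-prompt pipeline sketched in the abstract can be illustrated as follows. This is a hypothetical illustration, not the paper's actual prompts or implementation: the prompt wording, the `llm` stub (standing in for a real LLM API call), and the symbol format are all assumptions made for the sketch.

```python
# Hypothetical sketch of the two-prompt pipeline from the abstract:
# prompt 1 extracts frontend interactions from source code; a human
# validates/edits the list; prompt 2 maps it to concrete alphabet symbols.

PROMPT_EXTRACT = (
    "List every user interaction exposed by this frontend source code, "
    "one per line:\n{source}"
)
PROMPT_DEFINE = (
    "For each interaction below, define a learning-alphabet symbol and "
    "its implementation against the source code.\n"
    "Interactions:\n{interactions}\nSource:\n{source}"
)

def llm(prompt: str) -> str:
    """Stand-in for a real LLM API call; returns canned output here."""
    if "one per line" in prompt:
        return "click_login\nsubmit_form\nopen_settings"
    return "CLICK_LOGIN -> driver.click('#login')"

def extract_interactions(source: str) -> list[str]:
    """Prompt 1: propose a human-readable interaction list for review."""
    raw = llm(PROMPT_EXTRACT.format(source=source))
    return [line.strip() for line in raw.splitlines() if line.strip()]

def define_alphabet(interactions: list[str], source: str) -> str:
    """Prompt 2: map the (possibly edited) list to alphabet symbols."""
    joined = "\n".join(interactions)
    return llm(PROMPT_DEFINE.format(interactions=joined, source=source))

interactions = extract_interactions("<html>...</html>")
# A human would validate and refine `interactions` before the second step.
alphabet = define_alphabet(interactions, "<html>...</html>")
```

The key design point mirrored here is the human-in-the-loop boundary between the two prompts: the intermediate interaction list is editable, so abstraction decisions stay with the engineer rather than the model.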

Original language: English
Title of host publication: Lecture Notes in Computer Science
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 117-137
Number of pages: 21
DOIs
Publication status: Published - 2026

Publication series

Name: Lecture Notes in Computer Science
Volume: 16470 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Keywords

  • Active automata learning
  • Formal methods
  • High assurance systems
  • Model driven development
  • Neurosymbolic design
