Runtime verification (RV) is a research area of computer science concerned with analyzing the behavior of programs. In its simplest form, it checks whether a system execution complies with a formally specified correctness property. Typically, given such a property, a monitor is synthesized; the monitor indicates whether a violation has occurred, signaling a potential problem in the system. Beyond detection, topics such as enforcing properties at runtime and reacting to identified problems in the underlying system are also studied in the runtime verification community.
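To make the monitoring idea concrete, the following minimal sketch shows an online monitor for a simple safety property. The property ("no send after close"), the event names, and the trace are illustrative assumptions, not taken from any particular RV tool or formalism.

```python
from enum import Enum

class Verdict(Enum):
    OK = "no violation so far"
    VIOLATION = "property violated"

class Monitor:
    """Finite-state monitor for the illustrative safety property
    "no 'send' event may occur after a 'close' event"."""

    def __init__(self):
        self.closed = False
        self.verdict = Verdict.OK

    def step(self, event: str) -> Verdict:
        if self.verdict is Verdict.VIOLATION:
            return self.verdict            # a violation verdict is irrevocable
        if event == "close":
            self.closed = True
        elif event == "send" and self.closed:
            self.verdict = Verdict.VIOLATION
        return self.verdict

trace = ["send", "close", "send"]          # hypothetical execution trace
m = Monitor()
for e in trace:
    print(e, "->", m.step(e).value)        # the third event triggers the violation
```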
Model-based diagnosis (MBD) in computer science and AI addresses the problem of detecting whether a system is functioning incorrectly and, once a deviation from its expected behavior has occurred, determining as accurately as possible which parts of the system are failing and which kinds of faults are present. It uses formal models of the system's component behavior and interconnections to hypothesize (a minimal number of) faults in the system that best explain the deviations between the values predicted by the model and the actual observations.
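One common formalization of this idea is consistency-based diagnosis, sketched below for a hypothetical circuit of two inverters in series: a diagnosis is a subset-minimal set of components whose assumed abnormality makes the component model consistent with the observations. The circuit, the wire names, and the brute-force enumeration are illustrative assumptions chosen for brevity, not a definitive implementation.

```python
from itertools import combinations, product

# Components as constraints over a wire valuation v; a component behaves
# correctly iff its constraint holds (two inverters in series: a -> b -> c).
components = {
    "inv1": lambda v: v["b"] == (not v["a"]),
    "inv2": lambda v: v["c"] == (not v["b"]),
}
internal = ["b"]                       # unobserved internal wire
obs = {"a": False, "c": True}          # observed values; the model predicts c = False

def consistent(abnormal):
    """True if assuming all components outside `abnormal` are healthy is
    consistent with the observations for some internal wire valuation."""
    for vals in product([False, True], repeat=len(internal)):
        v = dict(obs, **dict(zip(internal, vals)))
        if all(f(v) for name, f in components.items() if name not in abnormal):
            return True
    return False

# Enumerate subset-minimal diagnoses by increasing cardinality.
diagnoses, names = [], list(components)
for k in range(len(names) + 1):
    for delta in combinations(names, k):
        if consistent(delta) and not any(set(d) <= set(delta) for d in diagnoses):
            diagnoses.append(delta)

print(diagnoses)   # [('inv1',), ('inv2',)] -- either inverter alone explains c
```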
MBD – and the related field of Fault Detection, Identification and Reconfiguration (FDIR) in control engineering – extends and complements RV by not only detecting incorrect system behavior but also identifying its possible root causes. Determining these causes is essential for formulating effective mitigation strategies. Diagnostic capabilities also play a crucial role in enhancing system resilience, that is, the intrinsic capability of a system to maintain its required functionality when impacted by anticipated and unanticipated contingencies that may not have been explicitly considered during system design.
To date, a number of approaches have been proposed for bringing together RV and diagnosis. In particular, monitors that augment a system to check desired properties on actual trace data and report violation or validation at runtime can serve as observations about the system. For example, (Bauer, Leucker, and Schallhart 2006) proposed a framework for runtime reflection based on this idea, computing conflicts and diagnoses from a model of component dependencies. (Natan, Kalech, and Bartak 2023) consider multi-agent systems in which the agents and a system goal are observed across multiple executions (runs) and spectrum-based diagnosis methods are used to explain failed runs. Assumption-based RV (Cimatti, Tian, and Tonetta 2019) is a framework in which monitor outputs (properties satisfied or not satisfied) are conditioned on the assumption that the involved parts of the system behave correctly, allowing one to reason explicitly about these health assumptions in a model-checking framework.
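The sketch below illustrates the general flavor of combining monitor verdicts with a component model; it is not the algorithm of any of the cited works, and the property names, component names, and dependency map are hypothetical. Each violated property yields a conflict set (its dependency set must contain at least one faulty component), and subset-minimal hitting sets of the conflicts are the candidate diagnoses.

```python
from itertools import chain, combinations

# Hypothetical map from monitored properties to the components they depend on.
depends_on = {
    "p1": {"sensor", "filter"},
    "p2": {"filter", "controller"},
    "p3": {"controller", "actuator"},
}
# Hypothetical monitor verdicts collected at runtime.
verdicts = {"p1": "violated", "p2": "violated", "p3": "ok"}

# Each violated property contributes its dependency set as a conflict.
conflicts = [depends_on[p] for p, v in verdicts.items() if v == "violated"]

comps = set().union(*depends_on.values())

def candidates(s):
    """All subsets of s, enumerated by increasing cardinality."""
    return chain.from_iterable(combinations(sorted(s), k) for k in range(len(s) + 1))

# Diagnoses = subset-minimal hitting sets of the conflicts.
diagnoses = []
for cand in candidates(comps):
    cs = set(cand)
    if all(cs & c for c in conflicts) and not any(set(d) <= cs for d in diagnoses):
        diagnoses.append(cand)

print(diagnoses)   # [('filter',), ('controller', 'sensor')]
```

In this hypothetical scenario, the single-fault diagnosis {filter} explains both violated properties, while the double fault {controller, sensor} remains an alternative explanation.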
However, several challenges and open research questions remain regarding the integration of diagnosis and RV. These include accurately attributing the violations of one or more system properties observed in execution traces to specific failing components. A key question is how to effectively combine and utilize observations from multiple monitors and different time points. Additionally, computational issues emerge when linking faults in system components, whose behavior is modeled as propositional logic sentences or finite state machines, to violations of (arbitrary) temporal logic properties monitored during system execution.
In this workshop, we aim to explore these questions through presentations and focused discussions, bringing together experts from both fields. The topics will include formal methods, theoretical frameworks, and computational approaches tailored to integrated RV and diagnosis tasks – covering logic-based, temporal, discrete-event, qualitative, continuous, hybrid, and probabilistic methods. Furthermore, the workshop will highlight real-world applications in domains such as reliable software systems, resilient cyber-physical systems, digital twins, and explainable AI.