Formal Verification Meets Machine Learning

Seminar in Theoretical CS, Summer Semester 2018


  • 19.01.2018: we are online


April 12 Kick-off meeting in the seminar room of i2 (building E1, room 4201b);
here are the presentation slides
May 14 Detailed outline of report due
June 11 Full report due
July 2 Presentation slides due
July 19 Seminar talks


The term formal verification refers to the theory and practice of computer-supported mathematical analysis methods for ensuring the correctness of software (and hardware) systems. Modeling and verifying such systems is particularly important for safety-critical systems, in which errors can be disastrous: loss of life, major financial losses, etc. Techniques to safeguard against such scenarios are therefore essential. Testing can identify problems, especially if done in a rigorous fashion, but is generally not sufficient to guarantee a satisfactory level of quality. The latter additionally requires proving the correctness of systems via automated deduction, e.g., using theorem proving or model checking as a core reasoning method.
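To give a flavor of what model checking does in its simplest form, the following sketch performs explicit-state reachability analysis of a toy mutual-exclusion model. Everything here (the two-process model, the state names, the function names) is an illustrative assumption, not part of the seminar material; real model checkers operate on far richer formalisms.

```python
from collections import deque

# Toy mutual-exclusion model (hypothetical): two processes, each of which
# is idle, waiting, or in its critical section. The safety property to
# check: the two processes are never in their critical sections at once.
IDLE, WAIT, CRIT = "idle", "wait", "crit"

def successors(state):
    """Enumerate all states reachable in one step (one process moves)."""
    for i in (0, 1):
        other = state[1 - i]
        if state[i] == IDLE:
            yield tuple(WAIT if j == i else state[j] for j in (0, 1))
        elif state[i] == WAIT and other != CRIT:  # enter only if the section is free
            yield tuple(CRIT if j == i else state[j] for j in (0, 1))
        elif state[i] == CRIT:
            yield tuple(IDLE if j == i else state[j] for j in (0, 1))

def check_safety(init, bad):
    """Explicit-state model checking of a safety property via breadth-first
    search: return a counterexample path to a bad state, or None if the
    property holds on all reachable states."""
    queue = deque([(init, [init])])
    seen = {init}
    while queue:
        state, path = queue.popleft()
        if bad(state):
            return path
        for succ in successors(state):
            if succ not in seen:
                seen.add(succ)
                queue.append((succ, path + [succ]))
    return None

cex = check_safety((IDLE, IDLE), lambda s: s == (CRIT, CRIT))
print("property holds" if cex is None else f"violation: {cex}")
# → property holds
```

The scalability issues mentioned below stem from exactly this exhaustive exploration: the set `seen` grows exponentially with the number of components, which is why symbolic techniques and, more recently, learning-based methods are needed for realistic systems.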

A prominent field related to artificial intelligence (AI) is machine learning (ML). Very broadly, machine learning comprises algorithms that independently learn from data observed in possibly unknown environments. Applications include – to mention only a few – autonomous systems, computer vision, and video games. In the real world, such environments are not well defined in advance, so a learning system risks taking safety-critical actions on its way to an optimal strategy.

Growing application areas for machine learning, such as autonomous driving, require the exclusion, or at least the likely avoidance, of unsafe behaviors. An important question is then how confidence in system behaviors obtained from machine learning can be established by formal verification. Conversely, industrial use of verification methods such as model checking still suffers from scalability issues in large applications. Leveraging the capability of machine learning to assess large data sets can help to enable the verification of more realistic systems. The following directions are particularly important:

  • Safety-related issues for machine learning
  • Explainability in AI and in model checking
  • Scalability and applicability of formal verification


Basic knowledge in the following areas is expected:

  • Formal languages and automata theory
  • Mathematical logic

Previous knowledge in either of the two fields of formal verification and machine learning is helpful but not mandatory.


Verification of Properties

  1. Safety Verification of Deep Neural Networks
  2. Verification of Markov Decision Processes Using Learning Algorithms
  3. Smoothed Model Checking
  4. Formal Verification of Neural Networks
  5. Verification and Control of Partially Observable Probabilistic Systems
  6. Counterexample Explanation for Probabilistic Systems

Learning of Invariants

  1. Learning Software Invariants
  2. Learning Data Structure Invariants
  3. Syntax-Guided Invariant Synthesis
  4. Synthesizing Inductive Invariants

Model Learning

  1. Generating Models of Communication Protocols
  2. Learning Finite Automata
  3. Learning and Planning with Timing Information in Markov Decision Processes

Synthesis of Programs and Algorithms

  1. Policy Learning in Continuous-Time Markov Decision Processes
  2. Safety-Constrained Reinforcement Learning for Markov Decision Processes
  3. Learning Static Analyzers
  4. Learning Disjunctions of Predicates
  5. Learning Explanatory Rules from Noisy Data
  6. Discovery of Divide-&-Conquer Algorithms
  7. Learning by Program Synthesis
  8. Multi-Objective Policy Generation for Mobile Robots

Additional Material


Registration to the seminar is handled between January 19 and February 2, 2018, via the central online form.


Thomas Noll <noll at>