Trends in Computer-Aided Verification

Seminar in Theoretical CS, Summer 2022

News

  • 25.11.2021: we are online

Dates & Deadlines

  • April 4: Introductory video online
  • April 11: Topic preferences due
  • May 9: Detailed outline due
  • June 7: Seminar report due
  • June 27: Presentation slides due
  • July 12 & 13: Seminar (room 9U10, building E3)
  • July 20: Final revision of report

Introduction and Assignment of Topics

Overview

The term Computer-Aided Verification refers to the theory and practice of computer-supported formal analysis methods for both hardware and software systems. It is also the name of an annual academic conference on the topic. Modeling and verifying such systems is an important concern, particularly for safety-critical systems, where errors can be disastrous: loss of life, major financial losses, and the like. Techniques to safeguard against such scenarios are therefore essential. Testing can identify problems, especially if done in a rigorous fashion, but is generally not sufficient to guarantee a satisfactory level of quality. Formal methods, on the other hand, offer techniques ranging from the description of requirements in a formal notation, which allows for rigorous reasoning about them, to the automatic verification of software. Due to the complexity of these approaches and of the systems they are applied to, automated computer support is indispensable.

The goal of this seminar is to give an overview of the related research activities of the MOVES group. It covers several areas of computer science in which computer-aided verification techniques are central and which represent the research fields of the respective supervisors. Each area features a number of topics, each described in a scientific journal or conference article. These research articles are the basis on which students prepare their report and presentation. The following list gives an overview of the areas that will be covered.

Schedule of Talks

Tuesday, July 12th
  • 13:30 – 15:00: Christian Wende, Daniel Cegielski, Lukas Bücker
  • 15:30 – 16:30: Felix Kappel, Mayra Elwes

Wednesday, July 13th
  • 09:00 – 10:30: Stanislav Mironenko, Miriam Eisenhofer, Jialei Yao
  • 11:00 – 12:00: Laurenz Neumann, Mohammad Wissam Essa

Topics

(The annotations “B” and “M” mark topics at Bachelor and Master level, respectively.)

A. Robustness of Feed-Forward Neural Networks

Deployment of deep neural networks (DNNs) in safety- or security-critical systems requires provable guarantees on their correct behaviour. A common requirement is robustness to adversarial perturbations, i.e., the preservation of correct classification under slight variations of the input. More concretely, local (also called pointwise) robustness is defined with respect to an input point and its neighbourhood as the invariance of the classification over that neighbourhood. Global robustness measures robustness over all (meaningful) inputs and can, e.g., be estimated as the expectation of local robustness over the test dataset, weighted by some input distribution. When a network returns incorrect output, it needs to be modified without changing its overall behaviour or introducing new bugs; this process is referred to as network repair. However, if the output is incorrect because the input lies outside the domain the network was trained on, repair is not reasonable. Instead, the network should abstain from classifying and return an “unknown” label; to this end, the technique of novelty detection can be employed. In this part of the seminar, we are going to discuss several approaches to these four aspects for feed-forward neural networks.
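
To make the notion of local robustness concrete, here is a minimal sketch (in Python, assuming numpy) that tests it empirically by sampling perturbations within an L∞ ball around an input point; sampling can only falsify the property, whereas the verification techniques discussed below prove it for all perturbations. The linear “model” and the input point are toy stand-ins, not taken from any of the listed papers.

    import numpy as np

    def locally_robust_sample(model, x, eps, n_samples=1000, rng=None):
        """Empirically test local robustness of `model` at point `x`:
        sample perturbations within the L-infinity ball of radius `eps`
        and check that the predicted class never changes. Sampling can
        only falsify robustness; a verifier must prove it for all inputs."""
        rng = rng or np.random.default_rng(0)
        label = np.argmax(model(x))                   # class at the centre point
        for _ in range(n_samples):
            delta = rng.uniform(-eps, eps, size=x.shape)
            if np.argmax(model(x + delta)) != label:  # counterexample found
                return False
        return True                                   # no violation observed

    # Toy linear "network" standing in for a trained DNN:
    W = np.array([[1.0, -1.0], [-1.0, 1.0]])
    model = lambda x: W @ x
    print(locally_robust_sample(model, np.array([2.0, 0.5]), eps=0.1))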

  1. R. Feinman et al.: Detecting Adversarial Samples from Artifacts. ICML 2017 (B/M)
  2. T. Henzinger et al.: Outside the box: Abstraction-based monitoring of neural networks. ECAI 2020 (B/M)
  3. K. Leino et al.: Globally-Robust Neural Networks. PMLR 2021 (M)
    • Student: –
    • Supervisor: –
  4. V. Hashemi et al.: OSIP: Tightened Bound Propagation for the Verification of ReLU Neural Networks. SEFM 2021 (B/M)
  5. F. Fu, W. Li: Sound and Complete Neural Network Repair with Minimality and Locality Guarantees. arXiv 2021 (M)
    • Student: –
    • Supervisor: –

B. Verification of Recurrent Neural Networks

Recurrent neural networks are a particular kind of deep neural network with the ability to store information from previous evaluations in constructs called memory units. This differentiates them from feed-forward neural networks, where each evaluation of the network is performed independently of past evaluations. It enables applications such as machine translation and speaker recognition, as well as many other tasks where the network’s output may be affected by previously processed inputs. However, it also makes reasoning about a network’s behaviour a challenging problem.
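
As a minimal illustration of the difference, the following sketch implements a single recurrent evaluation step: the hidden state h persists across evaluations, so the output for each input depends on all previously processed inputs. The Elman-style cell and its weights are invented for illustration and do not come from the papers below.

    import numpy as np

    # Toy Elman-style cell: h' = tanh(W_h @ h + W_x @ x).
    # All weights are invented for illustration.
    W_h = np.array([[0.5, -0.2], [0.1, 0.3]])   # hidden-to-hidden weights
    W_x = np.array([[1.0], [0.5]])              # input-to-hidden weights

    def rnn_step(h, x):
        """One evaluation: the new hidden state depends on the old one,
        so the output for an input depends on the whole input prefix."""
        return np.tanh(W_h @ h + W_x @ np.atleast_1d(x))

    h = np.zeros(2)                  # initial memory
    for x in [1.0, 0.0, -1.0]:       # an input sequence
        h = rnn_step(h, x)
        print(h)                     # output differs even for repeated inputs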

  1. Yuval Jacoby, Clark Barrett, Guy Katz: Verifying Recurrent Neural Networks Using Invariant Inference. ATVA 2020 (B/M)
  2. Hongce Zhang, Maxwell Shinn, Aarti Gupta, Arie Gurfinkel, Nham Le, Nina Narodytska: Verification of recurrent neural networks for cognitive tasks via reachability analysis. ECAI 2020 (M)
  3. Du, T., Ji, S., Shen, L., Zhang, Y., Li, J., Shi, J., Fang, C., Yin, J., Beyah, R., Wang, T.: Cert-RNN: Towards certifying the robustness of recurrent neural networks. CCS 2021 (B/M)
    • Student: –
    • Supervisor: –
  4. Nicolas Papernot, Patrick McDaniel, Ananthram Swami, Richard Harang: Crafting Adversarial Input Sequences for Recurrent Neural Networks. Milcom 2016 (B/M)
  5. Michael E. Akintunde, Andreea Kevorchian, Alessio Lomuscio, Edoardo Pirovano: Verification of RNN-Based Neural Agent-Environment Systems. AAAI 2019 (B/M)
    • Student: –
    • Supervisor: –

C. Deductive Program Verification

Deductive verifiers take a program with a specification and try to prove that the program implements the specification. This proof requires the formal semantics of the programming language and – for automated verification – sophisticated algorithms to find proofs. In this part of the seminar, we look at deductive program semantics and how to build and validate deductive verifiers.
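
To illustrate the core mechanism behind such proofs, here is a minimal weakest-precondition calculator for a toy language of assignments and sequencing, based on the classical rules wp(x := e, Q) = Q[e/x] and wp(S1; S2, Q) = wp(S1, wp(S2, Q)). The mini-language and its sympy encoding are our own simplification; real verifiers handle full programming languages and discharge the resulting verification conditions with SMT solvers.

    import sympy as sp

    def wp(stmt, post):
        """Weakest precondition for a tiny language: a statement is either
        an assignment (var, expr) meaning "var := expr", or a list of
        statements meaning their sequential composition."""
        if isinstance(stmt, list):           # wp(S1; S2, Q) = wp(S1, wp(S2, Q))
            for s in reversed(stmt):
                post = wp(s, post)
            return post
        var, expr = stmt                     # wp(x := e, Q) = Q[e/x]
        return post.subs(var, expr)

    x, y = sp.symbols("x y")
    prog = [(x, x + 1), (y, 2 * x)]          # x := x + 1; y := 2 * x
    post = sp.Le(y, 10)                      # postcondition: y <= 10
    print(wp(prog, post))                    # 2*x + 2 <= 10, i.e. x <= 4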

  1. Patrick Cousot, Radhia Cousot, Manuel Fahndrich, Francesco Logozzo: Automatic Inference of Necessary Preconditions. VMCAI 2013 (M)
    • Student: –
    • Supervisor: –
  2. Benjamin Lucien Kaminski, Joost-Pieter Katoen, Christoph Matheja: Expected Runtime Analysis by Program Verification. In Foundations of Probabilistic Programming, 2020 (B/M)
  3. Gilles Barthe, Thomas Espitau, Marco Gaboardi, Benjamin Grégoire, Justin Hsu, and Pierre-Yves Strub: An Assertion-Based Program Logic for Probabilistic Programs. ESOP 2018 (M)
  4. Peter Müller: Building Deductive Program Verifiers. In Engineering Secure and Dependable Software Systems, 2019 (B/M)
  5. Max Willsey, Chandrakana Nandi, Yisu Remy Wang, Oliver Flatt, Zachary Tatlock, Pavel Panchekha: egg: Fast and extensible equality saturation. POPL 2021 (B/M)
    • Student: –
    • Supervisor: –
  6. Gaurav Parthasarathy, Peter Müller, Alexander J. Summers: Formally Validating a Practical Verification Condition Generator. CAV 2021 (M)
    • Student: –
    • Supervisor: –

D. Synthesizing Quantitative Loop Invariants for Probabilistic Programs

The hardest task in (probabilistic) program verification is arguably to reason about loops, especially those that induce infinite-state transition systems. To verify infinite-state systems, classical techniques, e.g., (probabilistic) bounded model checking, are typically combined with the search for a (quantitative) inductive loop invariant, i.e., a superset of the reachable states which is closed under the transition relation. Proving a (not necessarily inductive) safety property then amounts to synthesizing a sufficiently strong inductive invariant that excludes the bad states. In this part of the seminar, we are going to discuss several approaches to the (semi-)automated synthesis of quantitative loop invariants for probabilistic programs.
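
As a concrete example of a quantitative invariant, consider the pGCL-style loop "while (c = 1) { x := x + 1; c := 0 [1/2] c := 1 }": the expectation I(x, c) = x + 2c is an exact quantitative invariant and predicts the expected final value of x. The sketch below (a standard textbook-style example, not taken from a specific paper) checks this candidate by Monte Carlo simulation; the synthesis techniques below instead derive and prove such invariants symbolically.

    import random

    def run_loop(x, c):
        """Simulate the loop: while c = 1 do x := x + 1; c := 0 [1/2] c := 1."""
        while c == 1:
            x += 1
            c = random.choice([0, 1])    # fair probabilistic choice
        return x

    # Candidate quantitative invariant I(x, c) = x + 2*c: for c = 1 it is a
    # fixed point of the loop's expectation transformer, predicting E[final x].
    def invariant(x, c):
        return x + 2 * c

    x0, c0, n = 5, 1, 100_000
    avg = sum(run_loop(x0, c0) for _ in range(n)) / n
    print(f"estimated E[x] = {avg:.3f}, invariant predicts {invariant(x0, c0)}")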

  1. Yu-Fang Chen, Chih-Duo Hong, Bow-Yaw Wang, Lijun Zhang: Counterexample-Guided Polynomial Loop Invariant Generation by Lagrange Interpolation. CAV 2015 (B/M)
    • Student: –
    • Supervisor: –
  2. Gilles Barthe, Thomas Espitau, Luis María Ferrer Fioriti, Justin Hsu: Synthesizing Probabilistic Invariants via Doob’s Decomposition. CAV 2016 (M)
  3. Yijun Feng, Lijun Zhang, David N. Jansen, Naijun Zhan, Bican Xia: Finding Polynomial Loop Invariants for Probabilistic Programs. ATVA 2017 (M)
    • Student: –
    • Supervisor: –
  4. Ezio Bartocci, Laura Kovács, Miroslav Stankovič: Automatic Generation of Moment-Based Invariants for Prob-Solvable Loops. ATVA 2019 (B/M)
    • Student: –
    • Supervisor: –

E. Safety and Security Assessment

Many real-life systems like power plants, medical devices and data centers have to meet high standards, both in terms of safety (i.e. absence of unintentional failures) and security (i.e. no disruptions due to malicious attacks). Correspondingly, formal techniques for addressing both issues have been developed. In this part of the seminar, we will have a closer look at fault trees (safety) and attack trees (security).
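
As a small illustration of the static (safety) side, the following sketch computes the top-event probability of a fault tree with AND/OR gates under the usual assumption of independent basic events. The example tree is hypothetical; dynamic fault trees, as analysed in the first paper below, additionally feature order-dependent gates and require the model-checking techniques discussed there.

    from math import prod

    def prob(node, basic):
        """Top-event probability of a static fault tree, assuming independent
        basic events. A node is either a basic-event name or a tuple
        (gate, children) with gate in {"AND", "OR"}."""
        if isinstance(node, str):
            return basic[node]
        gate, children = node
        ps = [prob(c, basic) for c in children]
        if gate == "AND":                       # fails iff all children fail
            return prod(ps)
        return 1 - prod(1 - p for p in ps)      # OR: at least one child fails

    # Hypothetical tree: the system fails if the pump fails or both valves fail.
    tree = ("OR", ["pump", ("AND", ["valve1", "valve2"])])
    print(prob(tree, {"pump": 0.01, "valve1": 0.05, "valve2": 0.05}))  # 0.012475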

  1. Matthias Volk, Sebastian Junges, Joost-Pieter Katoen: Fast Dynamic Fault Tree Analysis by Model Checking Techniques. IEEE Trans. Ind. Informatics 14(1), 2018 (B/M)
    • Student: Mohammad Wissam Essa
    • Supervisor: Jip Spel
  2. D. Szekeres, K. Marussy, I. Majzik: Tensor-based reliability analysis of complex static fault trees. EDCC 2021 (B/M)
    • Student: –
    • Supervisor: –
  3. B. Kordy, S. Mauw, S. Radomirović, P. Schweitzer: Attack–defense trees. Journal of Logic and Computation 24(1), 2014 (B/M)
  4. Heiko Mantel, Christian W. Probst: On the Meaning and Purpose of Attack Trees. CSF 2019 (B/M)
    • Student: –
    • Supervisor: –

Prerequisites

Basic knowledge in the following areas is expected:

  • Formal languages and automata theory
  • Mathematical logic
  • Probability theory

Prior knowledge of programming language semantics and concurrency theory is helpful but not mandatory.

Registration

Registration for the seminar is handled via the SuPra system.

Grading Scheme

You can access the grading scheme here: https://moves.rwth-aachen.de/wp-content/uploads/seminar_grading_scheme.pdf

Contact

Thomas Noll <noll at cs.rwth-aachen.de>