# Seminar in Theoretical CS, Winter 2021/22

### News

- 19.06.2021: we are online

### Dates & Deadlines

| Date | Event |
| ---- | ----- |
| TBA  | Introduction |
| TBA  | Topic preferences due |
| TBA  | Detailed outline due |
| TBA  | Seminar report due |
| TBA  | Presentation slides due |
| TBA  | Seminar |

### Introduction and Assignment of Topics

- Slides of introduction

### Overview

The term *formal verification* refers to the theory and practice of computer-supported mathematical methods for analyzing the correctness of software (and hardware) systems. Modeling and verifying such systems is particularly important for safety-critical systems, in which errors can be disastrous: loss of life, major financial damage, etc. Testing can identify problems, especially if done rigorously, but it is generally not sufficient to guarantee a satisfactory level of quality. The latter additionally requires proving the correctness of systems via automated deduction, e.g., using theorem proving or model checking as a core reasoning method.
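The core idea of (explicit-state) model checking mentioned above can be illustrated with a minimal sketch: enumerate all states reachable from an initial state and check that no "bad" state is among them. The toy system and all names below are our own illustration, not taken from any of the seminar papers.

```python
from collections import deque

def reachable_states(init, transitions):
    """Explicit-state reachability: collect all states reachable from init
    via a breadth-first exploration of the transition relation."""
    seen = {init}
    queue = deque([init])
    while queue:
        state = queue.popleft()
        for succ in transitions(state):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen

# Toy system: a counter modulo 4 that may also reset to 0 at any time.
def transitions(state):
    return {(state + 1) % 4, 0}

# Safety property: the counter never reaches the bad value 5.
states = reachable_states(0, transitions)
assert 5 not in states  # the property holds on this model
print(sorted(states))   # -> [0, 1, 2, 3]
```

Real model checkers avoid this explicit enumeration for large state spaces, e.g., via symbolic representations; scalability is exactly where the machine-learning connection discussed below comes in.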

A prominent field related to *artificial intelligence* (AI) is *machine learning* (ML). Very broadly, machine learning comprises algorithms that autonomously *learn* from data observed in possibly unknown environments. Applications include, to mention only a few, autonomous systems, computer vision, and video games. In the real world, such incompletely specified environments carry the risk that safety-critical actions are taken while an optimal strategy is still being learned.

Growing application areas of machine learning, such as autonomous driving, require unsafe behaviors to be excluded or at least made unlikely. An important question is therefore how confidence in system behaviors obtained from machine learning can be established by formal verification. Vice versa, industrial use of verification methods such as model checking still suffers from scalability issues in large applications. Leveraging the ability of machine learning to handle large data sets will help to make verification feasible for more realistic systems. This seminar addresses the following approaches:

- Employing learning techniques for software development and verification
- Safety analysis of deep neural networks
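To give a flavor of the first approach, consider invariant learning in the style of the ICE framework listed under the topics below: a learner proposes a candidate invariant from positive examples (reachable states), negative examples (bad states), and implication pairs (state transitions the invariant must be closed under). The sketch below, including the function `learn_interval` and the restriction to interval-shaped invariants over a single integer variable, is a deliberate simplification of ours, not the algorithm from the paper.

```python
def learn_interval(positives, negatives, implications):
    """ICE-style learning of a one-variable invariant of the form lo <= x <= hi.

    positives:    states the invariant must include
    negatives:    states the invariant must exclude
    implications: pairs (p, q) -- if p satisfies the invariant, q must too
    """
    # Smallest interval covering all positive examples.
    lo, hi = min(positives), max(positives)
    # Repair implication counterexamples by widening the interval.
    changed = True
    while changed:
        changed = False
        for p, q in implications:
            if lo <= p <= hi and not (lo <= q <= hi):
                lo, hi = min(lo, q), max(hi, q)
                changed = True
    # The widened candidate must still exclude every negative example.
    if any(lo <= n <= hi for n in negatives):
        return None  # no interval invariant separates the given samples
    return lo, hi

# Example: states 0..2 seen as reachable, 10 known to be bad,
# and one transition 2 -> 3 the invariant must be closed under.
print(learn_interval({0, 1, 2}, {10}, [(2, 3)]))  # -> (0, 3)
```

In the actual ICE setting, the examples are produced by a verifier (the "teacher") that checks each candidate against the program and returns a counterexample of one of the three kinds; the loop repeats until a genuine inductive invariant is found.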

### Topics

(The annotations “B” and “M” respectively refer to topics on Bachelor and Master level.)

#### Learning for Program Verification

- Pranav Garg, Christof Löding, P. Madhusudan, Daniel Neider:
  *ICE: A Robust Framework for Learning Invariants*. CAV 2014: 69-87
- Alessandro Abate, Mirco Giacobbe, Diptarko Roy:
  *Learning Probabilistic Termination Proofs*. CAV 2021
- Yakir Vizel, Arie Gurfinkel, Sharon Shoham, Sharad Malik:
  *IC3 – Flipping the E in ICE*. VMCAI 2017
- Xujie Si, Hanjun Dai, Mukund Raghothaman, Mayur Naik, Le Song:
  *Learning Loop Invariants for Program Verification*. NeurIPS 2018

#### Learning for Program Synthesis

- Yichen Yang, Jeevana Priya Inala, Osbert Bastani, Yewen Pu, Armando Solar-Lezama, Martin Rinard:
  *Program Synthesis Guided Reinforcement Learning*. arXiv:2102.11137 (2021)
- Mohammed Alshiekh, Roderick Bloem, Rüdiger Ehlers, Bettina Könighofer, Scott Niekum, Ufuk Topcu:
  *Safe Reinforcement Learning via Shielding*. AAAI 2018: 2669-2678

#### Verification of Deep Neural Networks

- Yichen Yang, Martin Rinard:
  *Correctness Verification of Neural Networks*. arXiv:1906.01030 (2019)
- Gregory Bonaert, Maximilian Baader, Dimitar I. Dimitrov, Martin Vechev:
  *Fast and Precise Certification of Transformers*. PLDI 2021
- Matthew Mirman, Timon Gehr, Martin Vechev:
  *Robustness Certification of Generative Models*. arXiv:2004.14756 (2021)
- Matthew Sotoudeh, Aditya V. Thakur:
  *Provable Repair of Deep Neural Networks*. arXiv:2104.04413 (2021)
- Yuval Jacoby, Clark Barrett, Guy Katz:
  *Verifying Recurrent Neural Networks Using Invariant Inference*. ATVA 2020
- Wonryong Ryou, Jiayu Chen, Mislav Balunović, Gagandeep Singh, Andrei Dan, Martin Vechev:
  *Scalable Polyhedral Verification of Recurrent Neural Networks*. CAV 2021

### Prerequisites

Basic knowledge in the following areas is expected:

- Formal languages and automata theory
- Mathematical logic
- Probability theory

Previous knowledge in machine learning and the semantics of programming languages is helpful but not mandatory.

### Registration

Registration for the seminar is handled via SuPra. Late registration is not possible.

### Additional Material

- Report template
- Presentation template
- How to Write a Seminar Paper
- Ethical Guidelines for Writing Academic Papers
- How to Give Presentations
- Introduction to LaTeX

### Contact

Thomas Noll <noll at cs.rwth-aachen.de>