# Seminar in Theoretical CS, Winter 2021/22

## News

- 19.06.2021: we are online

## Dates & Deadlines

| Date | Event |
| --- | --- |
| 13.10.2021, 15:30 | Introduction |
| 20.10.2021 | Topic preferences due |
| 15.11.2021 | Detailed outline due |
| 13.12.2021 | Seminar report due |
| 17.01.2022 | Presentation slides due |
| 08.–09.02.2022 | Seminar talks |

## Introduction and Assignment of Topics

## Overview

The term *formal verification* refers to the theory and practice of computer-supported mathematical analysis methods for ensuring the correctness of software (and hardware) systems. Modeling and verifying such systems is particularly important for safety-critical systems, in which errors can be disastrous: loss of life, major financial damage, etc. Techniques to safeguard against such scenarios are therefore essential. Testing can identify problems, especially if done in a rigorous fashion, but is generally not sufficient to guarantee a satisfactory level of quality. The latter additionally requires proving the correctness of systems via automated deduction, e.g., using theorem proving or model checking as a core reasoning method.
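The model-checking idea can be illustrated in miniature: verifying a safety property of the form "the system never reaches a bad state" reduces to a reachability search over the system's state space. The following sketch is not part of the seminar materials; the function and the toy system are purely illustrative.

```python
from collections import deque

def reachable_bad_state(initial, transitions, is_bad):
    """Breadth-first search over the state space; returns a shortest
    counterexample path to a bad state, or None if none is reachable."""
    queue = deque([(initial, [initial])])
    visited = {initial}
    while queue:
        state, path = queue.popleft()
        if is_bad(state):
            return path  # counterexample witnessing the safety violation
        for succ in transitions(state):
            if succ not in visited:
                visited.add(succ)
                queue.append((succ, path + [succ]))
    return None  # no bad state is reachable: the safety property holds

# Toy system: a counter modulo 5 that steps by +2 and must never reach 3.
path = reachable_bad_state(0, lambda s: [(s + 2) % 5], lambda s: s == 3)
# The search finds the violating run 0 -> 2 -> 4 -> 1 -> 3.
```

Real model checkers face the same search problem on astronomically larger state spaces, which is exactly the scalability issue discussed below.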

A prominent field related to *artificial intelligence* (AI) is *machine learning* (ML). Very broadly, machine learning refers to algorithms that independently *learn* from data observed in possibly unknown environments. Applications include, to name only a few, autonomous systems, computer vision, and video games. In the real world, such ill-specified environments carry the risk that safety-critical actions are taken while an optimal strategy is still being learned.
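As a concrete illustration of this learning setting (a minimal sketch, not taken from any of the seminar papers; all names and parameters are illustrative), classic tabular Q-learning can be run on a tiny deterministic chain environment:

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a chain 0..n_states-1: action 1 moves right,
    action 0 moves left (bounded at 0); reward 1 for reaching the last state."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            s2 = s + 1 if a == 1 else max(0, s - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: move q[s][a] toward the bootstrapped target
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
# After training, the greedy policy moves right in every non-goal state.
```

Note that the agent only discovers the reward by exploring; in a safety-critical environment, those exploratory steps are precisely where unsafe behavior can occur.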

Growing application areas of machine learning, such as autonomous driving, require that unsafe behaviors be excluded or at least made sufficiently unlikely. An important question is therefore how formal verification can establish confidence in system behaviors obtained from machine learning. Vice versa, industrial usage of verification methods such as model checking still suffers from scalability issues in large applications. Leveraging the capability of machine learning to assess large data sets may help to make verification feasible for more realistic systems. In this seminar, the following approaches will be addressed:

- Employing learning techniques for software development and verification
- Safety analysis of deep neural networks

## Schedule of Talks

| Time | Tuesday, February 8 | Wednesday, February 9 |
| --- | --- | --- |
| 09:00–10:30 | TBA | TBA |
| 11:00–12:00 | TBA | TBA |
| 13:00–14:30 | TBA | TBA |

## Topics

### Approaches to Machine Learning

- Introduction to Automata Learning
  - Paper: Bernhard Steffen, Falk Howar, Maik Merten: *Introduction to Active Automata Learning from a Practical Perspective*. SFM 2011
  - Student: Daniel Savchenko
  - Supervisor: Shahid Khan

- Introduction to Reinforcement Learning
  - Paper: Muddasar Naeem, Syed Tahir Hussain Rizvi, Antonio Coronato: *A Gentle Introduction to Reinforcement Learning and its Application in Different Fields*. IEEE Access 8, 2020
  - Student: Nicolay Strohschen
  - Supervisor: Jana Berger

### Learning for Program Verification

- Learning Loop Invariants
  - Paper: Pranav Garg, Christof Löding, P. Madhusudan, Daniel Neider: *ICE: A Robust Framework for Learning Invariants*. CAV 2014
  - Student: Ziyong Cheong
  - Supervisor: Jana Berger

- Model Checking vs. Machine Learning
  - Paper: Yakir Vizel, Arie Gurfinkel, Sharon Shoham, Sharad Malik: *IC3 – Flipping the E in ICE*. VMCAI 2017
  - Student: Moustafa Abdelsalam
  - Supervisor: Jana Berger

- Reinforcement Learning of Invariants
  - Paper: Xujie Si, Hanjun Dai, Mukund Raghothaman, Mayur Naik, Le Song: *Learning loop invariants for program verification*. NeurIPS 2018
  - Student: –
  - Supervisor: –

- Termination Analysis of Probabilistic Programs
  - Paper: Alessandro Abate, Mirco Giacobbe, Diptarko Roy: *Learning Probabilistic Termination Proofs*. CAV 2021
  - Student: Daniel Buchholz
  - Supervisor: Tobias Winkler

### Learning for Program Synthesis

- Programming by Example
  - Paper: Aditya Krishna Menon, Omer Tamuz, Sumit Gulwani, Butler W. Lampson, Adam Kalai: *A machine learning framework for programming by example*. ICML 2013
  - Student: Daniel Woortmann
  - Supervisor: Shahid Khan

- Synthesis of Probabilistic Programs
  - Paper: Aditya V. Nori, Sherjil Ozair, Sriram K. Rajamani, Deepak Vijaykeerthy: *Efficient synthesis of probabilistic programs*. ACM SIGPLAN Notices 50, 2015
  - Student: Thu Van Dao
  - Supervisor: Lutz Klinkenberg

- Learning Controllers under Safety Conditions
  - Paper: Mohammed Alshiekh, Roderick Bloem, Rüdiger Ehlers, Bettina Könighofer, Scott Niekum, Ufuk Topcu: *Safe Reinforcement Learning via Shielding*. AAAI 2018
  - Student: Constantin Venhoff
  - Supervisor: Tobias Winkler

- Combining Program Synthesis and Reinforcement Learning
  - Paper: Yichen Yang, Jeevana Priya Inala, Osbert Bastani, Yewen Pu, Armando Solar-Lezama, Martin Rinard: *Program Synthesis Guided Reinforcement Learning*. arXiv:2102.11137 (2021)
  - Student: –
  - Supervisor: –

- Deep Learning for Code Synthesis
  - Paper: Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow: *DeepCoder: Learning to Write Programs*. arXiv:1611.01989 (2016)
  - Student: Sascha Thiemann
  - Supervisor: Tobias Winkler

- Learning Programs from Noisy Data
  - Paper: Veselin Raychev, Pavol Bielik, Martin Vechev, Andreas Krause: *Learning programs from noisy data*. POPL 2016
  - Student: –
  - Supervisor: –

- Supervised Learning for Program Correctness
  - Paper: Rishabh Singh, Sumit Gulwani: *Predicting a Correct Program in Programming by Example*. CAV 2015
  - Student: Dominik Geißler
  - Supervisor: Jip Spel

### Verification of Deep Neural Networks

- An Abstract Domain for Certifying Neural Networks
  - Paper: Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev: *An Abstract Domain for Certifying Neural Networks*. POPL 2019
  - Student: Sven Büge
  - Supervisor: Christopher Brix

- Efficient Neural Network Verification via Adaptive Refinement and Adversarial Search
  - Paper: Patrick Henriksen, Alessio Lomuscio: *Efficient Neural Network Verification via Adaptive Refinement and Adversarial Search*. ECAI 2020
  - Student: Nicat Sultanov
  - Supervisor: Christopher Brix

- Correctness Verification by Tiling
  - Paper: Yichen Yang, Martin Rinard: *Correctness Verification of Neural Networks*. arXiv:1906.01030 (2019)
  - Student: Annika Rüll
  - Supervisor: Christopher Brix

- Fast and Precise Certification of Transformers
  - Paper: Gregory Bonaert, Maximilian Baader, Dimitar I. Dimitrov, Martin Vechev: *Fast and Precise Certification of Transformers*. PLDI 2021
  - Student: –
  - Supervisor: –

- Robustness Certification with Generative Models
  - Paper: Matthew Mirman, Alexander Hägele, Pavol Bielik, Timon Gehr, Martin Vechev: *Robustness Certification with Generative Models*. PLDI 2021
  - Student: –
  - Supervisor: –

- Provable Repair of Deep Neural Networks
  - Paper: Matthew Sotoudeh, Aditya V. Thakur: *Provable Repair of Deep Neural Networks*. arXiv:2104.04413 (2021)
  - Student: Jonas Seidel
  - Supervisor: Christopher Brix

- Verifying Recurrent Neural Networks Using Invariant Inference
  - Paper: Yuval Jacoby, Clark Barrett, Guy Katz: *Verifying Recurrent Neural Networks Using Invariant Inference*. ATVA 2020
  - Student: –
  - Supervisor: –

- Scalable Polyhedral Verification of Recurrent Neural Networks
  - Paper: Wonryong Ryou, Jiayu Chen, Mislav Balunović, Gagandeep Singh, Andrei Dan, Martin Vechev: *Scalable Polyhedral Verification of Recurrent Neural Networks*. CAV 2021
  - Student: –
  - Supervisor: –

- Property-Directed Verification of Recurrent Neural Networks
  - Paper: Igor Khmelnitsky, Daniel Neider, Rajarshi Roy, Benoît Barbot, Benedikt Bollig, Alain Finkel, Serge Haddad, Martin Leucker, Lina Ye: *Property-Directed Verification of Recurrent Neural Networks*. arXiv:2009.10610 (2020)
  - Student: Niklas Molczanski
  - Supervisor: Thomas Noll

## Prerequisites

Basic knowledge in the following areas is expected:

- Formal languages and automata theory
- Mathematical logic
- Probability theory

Previous knowledge in machine learning and in the semantics of programming languages is helpful but not mandatory.

## Registration

Registration for the seminar is handled via SuPra. Late registrations are not possible.

## Additional Material

- Report template
- Presentation template
- How to Write a Seminar Paper
- Ethical Guidelines for Writing Scientific Papers (in German: "Ethische Richtlinien für das Verfassen wissenschaftlicher Arbeiten")
- How to Give Presentations
- How to Design Talks (video, starting at 1:01:12)
- How to Give a Great Research Talk (video)

- Introduction to LaTeX

## Contact

Thomas Noll <noll at cs.rwth-aachen.de>