CAISAR team at ESSAI 2025

For one week, members of the CAISAR team are giving a course at the European Summer School on Artificial Intelligence, titled Formal verification of symbolic and connectionist AI: a way toward higher quality software.

This page gathers the material for the course.

Session 1

The first session slides are available here. This introductory session provides a historically informed overview of formal verification and its concrete applications to machine learning.

Session 2

The second session slides are available here. This session explores in depth the notion of local robustness and its central role in the formal verification of neural networks. Building intuition about abstract interpretation and SMT solving, the session hints at how this property can actually be verified.
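To make the idea concrete, here is a minimal, self-contained sketch of verification by abstract interpretation with intervals; the network, its weights, the reference input, and the epsilon are all invented for illustration and are not taken from the course material. An input box around a reference point is pushed through a tiny ReLU network, and robustness is certified when the lower bound of the predicted class beats the upper bounds of all others.

```python
# Illustrative sketch (toy network, made-up numbers): certifying local
# robustness with interval abstract interpretation.

def interval_affine(lo, hi, weights, bias):
    """Propagate an input box through y = W x + b (one output per row of W)."""
    out_lo, out_hi = [], []
    for row, b in zip(weights, bias):
        # A positive weight maps the lower input bound to the lower output
        # bound; a negative weight swaps the two.
        l = b + sum(w * (lo[i] if w >= 0 else hi[i]) for i, w in enumerate(row))
        h = b + sum(w * (hi[i] if w >= 0 else lo[i]) for i, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it applies independently to each bound."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Toy network: 2 inputs -> 2 hidden ReLU neurons -> 2 output scores.
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
W2, b2 = [[2.0, -1.0], [-1.0, 1.0]], [0.0, 0.0]

x0, eps = [1.0, 0.2], 0.05           # reference input (class 0) and radius
lo = [v - eps for v in x0]
hi = [v + eps for v in x0]

lo, hi = interval_affine(lo, hi, W1, b1)
lo, hi = interval_relu(lo, hi)
lo, hi = interval_affine(lo, hi, W2, b2)

# Robustness holds when the predicted class (0 here) provably dominates:
# its worst-case score exceeds the best-case score of every other class.
robust = lo[0] > hi[1]
print("provably robust within eps:", robust)
```

Note that intervals are a deliberately coarse abstraction: when the check fails it does not prove a counterexample exists, which is where complete approaches such as SMT solving come in.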

Session 3

The third session slides are available here. Explainability of AI (xAI) is a very prolific field, with numerous tools trying to address legal and ethical requirements. Yet, what constitutes an explanation is not clearly defined. Furthermore, popular explanation techniques are not guaranteed to produce correct results. This session surveys the field of xAI and proposes a perspective on how formal verification can address such concerns.

Session 4

The fourth session slides are available here. This session expands on local robustness and explores how it can be used to fix bugs in neural networks, or even be embedded into a loss function. The topic of testing neural networks is also addressed.
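As a rough illustration of embedding robustness into a loss, consider a toy linear classifier f(x) = W x + b; everything below (the closed-form bound, the weights, the data point, the epsilon) is an invented example, not the formulation used in the course. Under an L-infinity perturbation of radius eps, the worst-case margin between the true class and a competitor has a closed form, and a hinge penalty on that bound yields a trainable robustness term.

```python
# Illustrative sketch (toy linear model, made-up numbers): a
# robustness-aware loss term built from a worst-case margin bound.

def worst_case_margin(W, b, x, y, j, eps):
    """Lowest possible margin (score_y - score_j) over the L-inf ball
    of radius eps around x, for a linear model f(x) = W x + b."""
    diff = [wy - wj for wy, wj in zip(W[y], W[j])]
    margin = sum(d * xi for d, xi in zip(diff, x)) + b[y] - b[j]
    # The adversary shifts each coordinate by eps against the margin,
    # costing eps times the L1 norm of the weight difference.
    return margin - eps * sum(abs(d) for d in diff)

def robust_hinge_loss(W, b, x, y, eps):
    """Sum of hinge penalties over all competing classes: zero only when
    the true class wins by a margin of at least 1 everywhere in the ball."""
    return sum(max(0.0, 1.0 - worst_case_margin(W, b, x, y, j, eps))
               for j in range(len(W)) if j != y)

W = [[2.0, -1.0], [-1.0, 1.0]]
b = [0.0, 0.0]
x, y = [1.0, 0.2], 0
print(robust_hinge_loss(W, b, x, y, eps=0.1))
```

Added to a standard data-fit loss, such a term lets the optimizer trade accuracy on the training points against certified robustness around them.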

Session 5

The fifth and final session slides are available here. This session presents the practicalities of formal verification of neural networks, exploring the tools and specification languages used in the community. It also briefly presents the existing venues of the field.