1. Foreword
CAISAR is a platform for the Characterization of Artificial Intelligence Safety And Robustness. It provides an expressive modelling language for specifying a wide variety of properties to be verified on a machine learning system, and relies on external provers to discharge the resulting verification conditions.
CAISAR aims to be component-agnostic: it can model and assess the trustworthiness of artificial intelligence systems that potentially mix symbolic and data-driven approaches. It supports the verification of properties for multiple kinds of machine learning models: Neural Network (NN) models are handled by means of the ONNX exchange format, and Support Vector Machines (SVM) by means of an ad hoc CSV format.
The motivations and design ideas behind CAISAR have been presented at the workshop AISafety 2022 [GABCL2022] and at the Journées Francophones des Langages Applicatifs 2023 [GABC2023].
1.1. Overall Design
CAISAR uses the Why3 platform, as a library, to provide both the specification language, called WhyML, and the infrastructure needed to interoperate with external tools. These are generically called provers, although some of them do not actually produce a logical proof.
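To give a flavour of the specification language, the following WhyML theory sketches the kind of property CAISAR can state. It is a minimal sketch: the `nn_apply` symbol and the numeric bounds are hypothetical placeholders declared locally for illustration, not CAISAR's actual built-in API; in real CAISAR specifications the network is bound to a concrete ONNX model through CAISAR's standard library.

```whyml
theory RobustnessSketch
  use real.Real

  (* Hypothetical symbol standing for the network under analysis;
     declared abstract here purely for illustration. *)
  function nn_apply (x1 x2: real) : real

  (* Property: on a small input box, the output stays below 1.0.
     CAISAR turns such goals into verification conditions that are
     dispatched to external provers. *)
  goal bounded_output:
    forall x1 x2: real.
      0.0 <= x1 /\ x1 <= 0.5 /\ 0.0 <= x2 /\ x2 <= 0.5 ->
      nn_apply x1 x2 <= 1.0
end
```

Complete, runnable specifications can be found among the examples distributed with the sources.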
1.2. Availability
The CAISAR platform page is https://git.frama-c.com/pub/caisar. The development version is available there, in source format, together with this documentation and several examples.
CAISAR is also distributed as an opam package and as a Docker image.
CAISAR is distributed as open source and freely available under the terms of the GNU LGPL v2.1.
See the file README.md for quick installation instructions, and the Installation section of this document for more detailed instructions.
1.3. Contact
Report any bug to the CAISAR Bug Tracking System: https://git.frama-c.com/pub/caisar/issues.
1.4. Acknowledgements
We gratefully acknowledge the people who contributed to CAISAR, directly or indirectly: Zakaria Chihani, Serge Durand, Tristan Le Gall, Augustin Lemesle, Aymeric Varasse.
- GABC2023
Girard-Satabin, J., Alberti, M., Bobot, F., Chihani, Z., Caractériser des propriétés de confiance d’IA avec Why3, Journées Francophones des Langages Applicatifs (JFLA 2023)
- GABCL2022
Girard-Satabin, J., Alberti, M., Bobot, F., Chihani, Z., Lemesle, A., CAISAR: A platform for Characterizing Artificial Intelligence Safety and Robustness, Proceedings of the Workshop on Artificial Intelligence Safety 2022 (AISafety 2022)