.. _foreword:
.. index:: Prover;prover
   Driver;driver

Foreword
========

CAISAR is a platform for the Characterization of Artificial Intelligence Safety
And Robustness. It supports an expressive modelling language that allows the
specification of a wide variety of properties to enforce on a machine learning
system, and relies on external provers to discharge verification conditions.
CAISAR aims to be component-agnostic: it can model and assess the
trustworthiness of artificial intelligence systems that potentially mix both
symbolic- and data-driven approaches. It supports the verification of properties
for multiple machine learning models. Neural Network (NN) models are handled by means of
the `ONNX <https://onnx.ai/>`_ exchange format. Support Vector Machines (SVM)
are supported through an ad hoc CSV format.

The motivations and design ideas behind CAISAR have been presented at the
following venues:

- the AISafety 2022 workshop [GirardSatabin2022]_
- the Journées Francophones des Langages Applicatifs 2023 [GirardSatabin2023]_
- the integrated Formal Methods (iFM) 2025 conference [Alberti2025]_
- the Journées Francophones des Langages Applicatifs 2026 [Ardouin2026]_

Overall Design
--------------

CAISAR uses the `Why3 <https://www.why3.org/>`_ platform, as a library, to
provide both the specification language, called WhyML, and the infrastructure
needed to interoperate with external provers. These tools are generally called
*provers*, even though some of them do not produce an actual logical proof.

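
To give a flavour of WhyML, here is a minimal, self-contained theory using only
the Why3 standard library. It is a toy goal unrelated to machine learning;
actual CAISAR specifications additionally rely on theories provided by CAISAR
for describing models and their properties, which are not shown here.

.. code-block:: whyml

   theory SquareNonNegative
     use real.Real

     (* Toy goal: the square of any real number is non-negative. *)
     goal square_non_negative: forall x: real. x * x >= 0.0
   end

An external prover orchestrated by the platform would then be asked to
discharge the ``square_non_negative`` goal.
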
Availability
------------

The CAISAR platform page is https://git.frama-c.com/pub/caisar. The development
version is available there, in source format, together with this documentation
and several examples.

CAISAR is also distributed as an `opam <https://opam.ocaml.org/>`_ package and
as a `Docker <https://www.docker.com/>`_ image.

CAISAR is distributed as open source and freely available under the terms of
the GNU LGPL v2.1.

See the file :file:`README.md` for quick installation instructions, and section
:ref:`installation` of this document for more detailed ones.

Contact
-------

Report any bug to the CAISAR Bug Tracking System:
https://git.frama-c.com/pub/caisar/issues.

Acknowledgements
----------------

We gratefully thank the people who contributed to CAISAR, directly or
indirectly: Zakaria Chihani, Serge Durand, Tristan Le Gall, Augustin Lemesle,
Aymeric Varasse.

.. [GirardSatabin2022] Girard-Satabin, J., Alberti, M., Bobot, F., Chihani, Z.,
   Lemesle, A., *CAISAR: A platform for Characterizing Artificial Intelligence
   Safety and Robustness*, Proceedings of the Workshop on Artificial
   Intelligence Safety 2022 (AISafety 2022)

.. [GirardSatabin2023] Girard-Satabin, J., Alberti, M., Bobot, F., Chihani, Z.,
   *Caractériser des propriétés de confiance d'IA avec Why3*, Journées
   Francophones des Langages Applicatifs (JFLA 2023)

.. [Alberti2025] Alberti, M., Bobot, F., Girard-Satabin, J., Grastien, A.,
   Varasse, A., Chihani, Z., *The CAISAR Platform: Extending the Reach of
   Machine Learning Specification and Verification*, Proceedings of integrated
   Formal Methods (iFM 2025)

.. [Ardouin2026] Ardouin, G., Alberti, M., Girard-Satabin, J., *La confiance
   avec le contrôle : spécification et vérification d'hyperpropriétés sur
   réseaux de neurones*, Journées Francophones des Langages Applicatifs
   (JFLA 2026)