.. _foreword:

.. index:: Prover; prover
           Driver; driver

Foreword
========
CAISAR is a platform for the Characterization of Artificial Intelligence Safety
And Robustness. It provides an expressive modelling language that allows the
specification of a wide variety of properties to enforce on a machine learning
system, and relies on external provers to discharge the resulting verification
conditions.

CAISAR aims to be component-agnostic: it can model and assess the
trustworthiness of artificial intelligence systems that potentially mix
symbolic and data-driven approaches. It supports the verification of properties
for several kinds of machine learning models: Neural Network (NN) models are
handled through the `ONNX <https://onnx.ai>`_ exchange format, while Support
Vector Machines (SVM) are described in an ad hoc CSV format.

The motivations and design ideas behind CAISAR were presented at the AISafety
2022 workshop [GABCL2022]_ and at the Journées Francophones des Langages
Applicatifs 2023 [GABC2023]_.

Overall Design
--------------

CAISAR uses the `Why3 <https://www.why3.org>`_ platform, as a library, to
provide both the specification language, called WhyML, and the infrastructure
needed to interoperate with external provers. These tools are generically
called *provers*, although some of them do not actually produce a logical
proof.
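
To give a flavor of the specification language, here is a minimal, illustrative
WhyML theory stating a toy local robustness property. It is only a sketch: the
``nn_apply`` symbol below is a placeholder declared locally for the example,
not an actual CAISAR primitive, and a real specification would instead refer to
a concrete ONNX model.

.. code-block:: whyml

   theory IllustrativeRobustness
     use real.Real
     use real.Abs

     (* Placeholder model: a real CAISAR specification would bind
        this symbol to an actual network, e.g. an ONNX file. *)
     function nn_apply (x: real) : real

     (* Local robustness: any input within 0.01 of a reference
        input x0 yields an output within 0.1 of its output. *)
     goal Robustness:
       forall x x0: real.
         abs (x - x0) <= 0.01 -> abs (nn_apply x - nn_apply x0) <= 0.1
   end

Goals such as ``Robustness`` are turned into verification conditions that
CAISAR then dispatches to the external provers at its disposal.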

Availability
------------

The CAISAR platform page is https://git.frama-c.com/pub/caisar. The development
version is available there, in source format, together with this documentation
and several examples.

CAISAR is also distributed as an `opam <https://opam.ocaml.org>`_ package and
as a `Docker <https://www.docker.com>`_ image.

CAISAR is open source, freely available under the terms of the GNU LGPL v2.1.

See the file :file:`README.md` for quick installation instructions, and section
:ref:`installation` of this document for more detailed instructions.
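
For instance, assuming an existing opam switch, installing the opam package and
checking that the executable is available typically amounts to:

.. code-block:: console

   $ opam install caisar
   $ caisar --help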

Contact
-------

Report any bug to the CAISAR Bug Tracking System:
https://git.frama-c.com/pub/caisar/issues.

Acknowledgements
----------------

We gratefully thank the people who contributed to CAISAR, directly or
indirectly: Zakaria Chihani, Serge Durand, Tristan Le Gall, Augustin Lemesle,
Aymeric Varasse.

.. [GABC2023] Girard-Satabin, J., Alberti, M., Bobot, F., Chihani, Z.,
   *Caractériser des propriétés de confiance d’IA avec Why3*,
   Journées Francophones des Langages Applicatifs (JFLA 2023)

.. [GABCL2022] Girard-Satabin, J., Alberti, M., Bobot, F., Chihani, Z.,
   Lemesle, A., *CAISAR: A platform for Characterizing Artificial
   Intelligence Safety and Robustness*, Proceedings of the Workshop on
   Artificial Intelligence Safety 2022 (AISafety 2022)