Published October 1, 2025
Library (graduated)

Robust-AI

Python Package

Maintainer: IRT-SystemX

Description

Train neural network models that are robust to adversarial attacks. The library implements the Projected Gradient Descent (PGD) attack as the primary technique for estimating and assessing a model's robustness. The models are built around an industrial visual-inspection use case (but are not limited to it) and tested on cracked-pavement detection (SDNET2018 dataset). Several state-of-the-art adversarial training methods are implemented on different DNN architectures to enhance model performance against adversarial attacks.

Owner: IRT-SystemX

Keywords: ML-training, adversarial-attack, robust-ai

CONTEXT
Robustness is a key property of trustworthy AI systems. It ensures that the behaviour of an AI component in production remains stable when facing different kinds of perturbations. Adversarial attacks are intentional perturbations crafted by malicious actors: subtle modifications to an AI system's input data designed to deceive it while remaining imperceptible to human observers.
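To make the attack concrete, here is a minimal sketch of an L-infinity PGD attack, the technique named in the description above. It uses a plain logistic-regression classifier with an analytic gradient as a hypothetical stand-in for a neural network (the model, parameter names, and step sizes are illustrative assumptions, not the library's API):

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Projected Gradient Descent attack (L-infinity ball).

    Perturbs input x within an eps-ball to maximize the
    cross-entropy loss of a logistic-regression model
    (a toy stand-in for a DNN; not the library's API).
    """
    x_adv = x.copy()
    for _ in range(steps):
        # Forward pass: p = sigmoid(w.x + b)
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        # Gradient of the cross-entropy loss w.r.t. the input
        grad = (p - y) * w
        # Gradient-ascent step in the sign of the gradient
        x_adv = x_adv + alpha * np.sign(grad)
        # Projection: clip back into the eps-ball around x
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

The two key PGD ingredients are visible here: the signed gradient step that increases the loss, and the projection that keeps the perturbation imperceptibly small (bounded by eps).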
VALUE PROPOSITION
Robust-AI is a library of training methods designed to improve the robustness of computer vision models. It provides model architectures and a range of training methods to easily train an AI model that is robust to adversarial attacks.
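The core idea behind such training methods can be sketched as follows: instead of updating the model on clean inputs, each step first perturbs the batch adversarially (here with a one-step signed-gradient perturbation for brevity) and then minimizes the loss on the perturbed inputs. The toy logistic-regression model, function names, and hyperparameters are illustrative assumptions, not the library's interface:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=100):
    """Sketch of adversarial training on a toy model.

    Each epoch: (1) inner maximization builds worst-case inputs
    inside an L-infinity eps-ball; (2) outer minimization takes a
    gradient step on those perturbed inputs.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        # (1) One-step L-inf perturbation of the whole batch
        p = sigmoid(X @ w + b)
        grad_x = (p - y)[:, None] * w        # dLoss/dx per sample
        X_adv = X + eps * np.sign(grad_x)
        # (2) Gradient-descent step on the perturbed batch
        p_adv = sigmoid(X_adv @ w + b)
        err = p_adv - y
        w -= lr * X_adv.T @ err / len(y)
        b -= lr * err.mean()
    return w, b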
WHEN TO USE IT
It should be used by data scientists for model development during the activity 'Develop and acquire ML models'.
RESOURCES