Machine Learning (ML) models are shared over the internet and between teams, and are used to make critical decisions. Despite this, models are not scanned with the same rigor as a PDF file in your inbox.
ModelScan, by Protect AI, is an open source project that scans models to determine whether they contain unsafe code. It is the first model scanning tool to support multiple model formats, including H5, Pickle, and SavedModel. This protects you when using PyTorch, TensorFlow, Keras, Sklearn, and XGBoost, with more formats on the way.
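Getting started is a single pip install; below is a minimal sketch of scanning a downloaded model before loading it (the file path is a placeholder):

```bash
# Install ModelScan from PyPI
pip install modelscan

# Scan a serialized model file; -p points at the file or directory to scan
modelscan -p ./downloads/model.pkl
```

ModelScan reports any unsafe code it finds, so the check can run in CI or at download time, before the model is ever deserialized.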

The industry's first model scanning tool to support multiple model formats.
ModelScan provides protection against model serialization attacks such as the following (a minimal example of one appears after the list):

- Credential Theft
- Data Theft
- Data Poisoning
- Model Poisoning
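To see why this matters, here is a minimal sketch of a Pickle-based model serialization attack (the class name and payload are illustrative only):

```python
import os
import pickle

class MaliciousModel:
    # pickle calls __reduce__ to learn how to rebuild an object.
    # An attacker can return any callable plus its arguments here,
    # and pickle.load() will invoke it the moment the file is read.
    def __reduce__(self):
        return (os.system, ("echo 'payload runs on model load'",))

# The attacker ships this file as a "pretrained model"
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousModel(), f)

# The victim only has to load it; no inference call is needed
with open("model.pkl", "rb") as f:
    pickle.load(f)  # executes os.system(...) during deserialization
```

Because the payload fires during deserialization itself, the file has to be inspected before it is loaded; that pre-load inspection is exactly what ModelScan provides.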
Learn More

- Announcing ModelScan: Open Source Protection Against Model Serialization Attacks (Chris King, Aug 03, 2023)
- The Time is Now to Protect AI (Ian Swanson, Jul 27, 2023)
- Alphabet Spells Out AI Security (Daryan Dehghanpisheh, Jun 15, 2023)