
Protect Models From Attacks

ModelScan is the industry's first Machine Learning (ML) model scanner to support multiple formats. It scans models to determine whether they contain unsafe code, enabling you to deploy safer AI. It is open source and free to use.

Learn More

The Most Comprehensive Model Scanner

ML models are shared over the internet and between teams, and are used to make critical decisions. Despite this, models are not scanned with the rigor applied to even a PDF file in your inbox. ModelScan is an open source project that scans models to determine whether they contain unsafe code. It is the first model scanning tool to support multiple model formats, including H5, Pickle, and SavedModel, so you are protected no matter which framework's format you use.

Formats Supported: Pickle, H5, SavedModel, and scikit-learn

Protection Against Model Serialization Attacks

Models often need to move from one machine to another before deployment. The process of saving a model so it can be transferred is called serialization. A Model Serialization Attack injects malicious code into the contents of a model during that process. ModelScan helps protect you from several types of Model Serialization Attacks.


Credential Theft

Cloud credentials used to read and write data across other systems in your environment are at risk of being compromised.


Data Theft

Requests sent to the model can be exposed, allowing theft of the data used to build and query the model.


Data Poisoning

The data the model returns after performing its task can be poisoned, opening you up to security risks.


Model Poisoning

If an attack takes place, the model's own results can be poisoned, with downstream impact on the AI application it powers.

Get Started with ModelScan

Check out our GitHub page to get started with ModelScan.

Learn More