
ModelScan

Protection Against Model Serialization Attacks

Machine learning (ML) models are shared over the internet and between teams, and are used to make critical decisions. Despite this, models are not scanned with the rigor applied to a PDF file in your inbox.

ModelScan, by Protect AI, is an open source project that scans models to determine if they contain unsafe code. It is the first model scanning tool to support multiple model formats, including H5, Pickle, and SavedModel. This protects you when using PyTorch, TensorFlow, Keras, scikit-learn, and XGBoost, with more on the way.
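To see why scanning matters, consider what "unsafe code" in a model file looks like. The sketch below is illustrative only (it is not ModelScan code): in the Pickle format, any object whose `__reduce__` hook returns a callable has that callable executed the moment the file is loaded, so the model never needs to be run at all.

```python
import pickle

# Illustrative sketch of a model serialization attack (not ModelScan code):
# unpickling invokes whatever callable __reduce__ returns.
class MaliciousModel:
    def __reduce__(self):
        # Hypothetical payload; a real attack might steal credentials or
        # poison data instead of evaluating a harmless expression.
        return (eval, ("1 + 1",))

payload = pickle.dumps(MaliciousModel())

# Simply loading the "model" executes the payload:
result = pickle.loads(payload)
print(result)  # the payload ran and returned 2
```

This is why a malicious model file is dangerous even if it is never used for inference: loading it is enough.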







The industry's first model scanning tool to support multiple model formats.

ModelScan provides protection against model serialization attacks such as:



Credential Theft

Theft of cloud credentials used to read and write data to other systems in your environment

Data Theft

Theft of the request sent to the model

Data Poisoning

Poisoning of the data sent onward after the model has performed its task

Model Poisoning

Altering the results of the model itself