
Announcing ModelScan: Open Source Protection Against Model Serialization Attacks

Written by Chris King | Aug 3, 2023 7:00:00 AM

Introduction

Machine learning (ML) models are shared over the internet and between teams, and they are used to make critical decisions. Despite this, models are not scanned with the same rigor as a PDF file in your inbox.

This needs to change, and proper tooling is the first step.

We are thrilled to announce ModelScan, an open source project that scans models to determine whether they contain unsafe code. It is the first model scanning tool to support multiple model formats. ModelScan currently supports the H5, Pickle, and SavedModel formats, which protects you when using PyTorch, TensorFlow, Keras, Sklearn, and XGBoost, with more on the way.

To expand the functionality of ModelScan, we welcome community contributions. In that vein, we were heavily inspired by Matthieu Maitre, who built PickleScan. Thank you! We hope our derived work expands on your vision and makes you proud.

If you already know what Model Serialization Attacks are and how they put you at risk, use the two snippets below to install ModelScan and scan your models before use.

# Install ModelScan:
pip install modelscan

# Scan a Model:
modelscan -p /PATH_TO/YOUR_MODEL.h5

Your scan will complete in moments.

For everyone else, let's start with an explanation of a machine learning model. A model is just a file, or series of files, that contains a few things: vectors (the main data structure), algorithms, and optionally layers and transformations. The purpose of the model is to encapsulate everything needed to generate predictions or other outputs.

Model formats that store code within the model files themselves create a threat surface. Unfortunately, most of the popular model formats support code storage. This puts the risk on everyone (including you) who uses these models. To understand the risk, you need to know how models can be exploited in this manner.

What is a Model Serialization Attack?

Some models are created by automated pipelines; others come from a data scientist’s laptop. In either case, the model needs to move from one computer to another before it can be used and widely adopted. That process of exporting a model is called serialization: it packages the model into a specific set of files for others to use.
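
For readers newer to this, the sketch below shows what serialization commonly looks like in practice for a scikit-learn model saved with pickle. The model, training data, and file name are placeholders for illustration, not part of the original post.

# A minimal sketch of model serialization with pickle (model, data, and path are illustrative)
import pickle
from sklearn.linear_model import LogisticRegression

model = LogisticRegression().fit([[0.0], [1.0]], [0, 1])

# Serialize (save) the trained model to a file so it can be shared
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Deserialize (load) the model on another machine before use
with open("model.pkl", "rb") as f:
    loaded_model = pickle.load(f)

print(loaded_model.predict([[0.5]]))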

A Model Serialization Attack is an attack in which malicious code is added to the contents of a model during serialization (saving) and before distribution, making it a modern version of the Trojan Horse.

The attack works by exploiting the saving and loading process of models. For example, when you load a model with model = torch.load(PATH), PyTorch opens the contents of the file and begins running the code within it. The moment you load the model, the exploit has executed.
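
To make the mechanism concrete, here is a minimal sketch of how such a payload can be planted. PyTorch's default torch.load path relies on Python's pickle under the hood, so a plain pickle example captures the idea; the class name, command, and file name below are purely illustrative.

# A minimal sketch of a Model Serialization Attack using pickle (names are illustrative)
import os
import pickle

class MaliciousPayload:
    def __reduce__(self):
        # pickle records this instruction; it runs the moment the file is deserialized
        return (os.system, ("echo 'arbitrary code executed at load time'",))

# The attacker embeds the payload while "saving" the model artifact
with open("malicious_model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# The victim merely loads the file: that alone executes the payload
with open("malicious_model.pkl", "rb") as f:
    pickle.load(f)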

A Model Serialization Attack can be used to execute:

  • Credential Theft - Stealing cloud credentials used for writing and reading data to other systems in your environment
  • Inference Data Poisoning - Altering the data sent onward after the model has performed its task
  • Inference Data Theft - Stealing the requests sent to the model
  • Model Poisoning - Altering the results of the model itself
  • Privilege Escalation Attack - Using the credentials of the instance loading the model to attack other assets, like training data

A thorough explanation of how attacks like this can be constructed and avoided can be found in our deep dive here.

Are You Vulnerable?

Treat models like any other digital artifact: consider their source.

If all of your models are created in house, and you trust all of the data scientists and machine learning engineers building and operating these systems, then you may consider your risk relatively small. However, insider threats exist and supply chain attacks on the open source foundations of ML do happen, so some risk remains.

If any models arrive in your organization from the web, then your total threat surface has expanded. Even with everyone doing their best, the chance of an attack is now much higher.

You are vulnerable to this type of attack if you are loading models that were serialized in a format that supports code execution, which most of them do.

Importantly, a Model Serialization Attack is not detected by traditional anti-virus and anti-malware solutions, so it requires additional tooling for protection.

Your choice of machine learning framework or model provider may require a format that supports code execution. There is often a tradeoff between security and convenience, and making an informed choice strengthens your security posture.

How to Prevent Model Serialization Attacks

If you can, adopt safe formats that do not store any code. Safe formats prevent the attack outright, but they may not always be possible to adopt. That is why we built and released ModelScan.
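
As one hedged illustration of a data-only exchange (NumPy is my example here, not something the post prescribes), raw weight arrays can be shared as .npy files, and the loader can be told to reject any embedded Python objects. The array shape and file name are placeholders.

# A minimal sketch of exchanging weights in a data-only format (illustrative only)
import numpy as np

weights = np.random.rand(128, 64)

# .npy files store raw array data, not executable code
np.save("layer1_weights.npy", weights)

# allow_pickle=False makes the loader reject any embedded Python objects
restored = np.load("layer1_weights.npy", allow_pickle=False)
print(restored.shape)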

ModelScan does not load the model and execute its code. Instead, it reads the contents into memory and searches for the unsafe code operations used in model serialization attacks. This approach keeps your environment safe even when handling a potentially compromised model. It is also fast, evaluating many models in seconds.
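
To give a feel for this kind of static inspection, here is a simplified sketch that walks a pickle file's opcode stream looking for references to dangerous callables, without ever deserializing anything. It illustrates the general approach only; it is not ModelScan's actual implementation, and the unsafe list and file path are placeholders.

# A simplified sketch of static pickle inspection (not ModelScan's actual implementation)
import pickletools

UNSAFE_GLOBALS = {("os", "system"), ("posix", "system"), ("nt", "system"),
                  ("builtins", "eval"), ("builtins", "exec"), ("subprocess", "Popen")}

def scan_pickle(path):
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    recent_strings = []  # string constants seen so far, used to resolve STACK_GLOBAL (simplified heuristic)
    # genops walks the opcode stream; nothing in the file is ever executed
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            recent_strings.append(arg)
        elif opcode.name == "GLOBAL":  # older protocols encode "module name" as one string
            module, _, name = arg.partition(" ")
            if (module, name) in UNSAFE_GLOBALS:
                findings.append(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
            module, name = recent_strings[-2], recent_strings[-1]
            if (module, name) in UNSAFE_GLOBALS:
                findings.append(f"{module}.{name}")
    return findings

print(scan_pickle("suspect_model.pkl"))  # placeholder path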

Scan All Models Before Use

Ad-hoc scanning is a great first step: drill it into yourself, your peers, and your friends to scan every new model they pull down to explore. On its own, however, it is not sufficient to secure production MLOps processes.

Model scanning needs to be performed more than once to secure the model throughout its lifecycle. We recommend that you:

  1. Scan all pre-trained models before loading them for further work, to prevent a compromised model from impacting your model building or data science environments.
  2. Scan all models after training to detect a supply chain attack that compromises new models.
  3. Scan all models before deploying them to an endpoint to ensure that the model has not been compromised while in storage.

These scan points fit naturally into a traditional ML pipeline.

Inside your ML pipelines, embed a script that runs the following command on each model before retraining, before publishing to a registry, and before deployment:

modelscan -p /PATH_TO/YOUR_MODEL

This will take seconds to run on most models, and less than a few minutes on some of the largest LLMs available today.
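
One way to wire this into a pipeline stage is to call the CLI from a small script and stop the stage when the scan flags anything. The sketch below assumes, as is common for scanners, that a non-zero exit code signals findings or a failed scan; confirm the exact exit codes against ModelScan's documentation.

# A minimal sketch of a pipeline gate around the ModelScan CLI
# Assumption: a non-zero exit code indicates findings or a failed scan; confirm against the docs.
import subprocess
import sys

def gate_on_scan(model_path):
    result = subprocess.run(
        ["modelscan", "-p", model_path],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print(f"ModelScan flagged {model_path}; stopping the pipeline.", file=sys.stderr)
        sys.exit(result.returncode)

gate_on_scan("/PATH_TO/YOUR_MODEL")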

If issues are found, ModelScan lists them in the scan output.

If an issue is found within the model, reach out to the creators of the model before using it to understand why they are delivering unsafe code. Do not use the model without a thorough investigation.

Adopt the following processes:

  • Add ModelScan to your standard list of packages installed on all data science environments, guiding users to scan any models that they find before using them.
  • Add ModelScan to your machine learning pipeline so it can be used after any step.
  • Before a file enters your Model Registry, scan it.
  • Before you deploy a model to a production endpoint, scan it.
  • Only use models from trusted sources and always validate their checksums before use (see the checksum sketch after this list).
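
Checksum validation is easy to automate. The sketch below hashes a downloaded model file and compares it against a digest published by the provider; the file name and expected value are placeholders.

# A minimal sketch of checksum validation (file name and expected digest are placeholders)
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # read in chunks so large model files do not need to fit in memory
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"  # published by the model provider

actual = sha256_of("downloaded_model.h5")
if actual != EXPECTED_SHA256:
    raise RuntimeError(f"Checksum mismatch: expected {EXPECTED_SHA256}, got {actual}")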

Implementing these processes in your MLOps practice is a great step toward expanding it into MLSecOps and embedding security there as effectively as in your traditional application development practices.

Not all models in your environment come from your pipelines; many are loaded simply to explore and better understand other approaches. They are still capable of compromising your security. ModelScan is built to work locally for your data scientists and other engineers as well.

When a data scientist starts to explore a new model from HuggingFace, make it second nature for them to scan the model before use. We’ve made it easy with direct HuggingFace support:

 

# Scan the model `structured-data-classification-grn-vsn` from Keras-IO
modelscan -hf keras-io/structured-data-classification-grn-vsn

Getting Started

ModelScan can secure your ML pipelines, inference endpoints (at any stage), and data science environments without restricting creativity or performance.

Implementation can be done in just a few minutes.

For the latest installation instructions, code samples, and documentation, check out the project on GitHub: https://github.com/protectai/modelscan.

  1. Scan models before use.
  2. Share ModelScan with anyone you know working in machine learning.
  3. Demand that model providers share scan results for their models when publishing.

Together, we can improve machine learning security and help build a safer AI-powered world.

Seriously, scan any model you don't trust before loading it for use.