Unsafe
Deserialization threats in AI and machine learning systems pose significant security risks, particularly when using Keras models with Lambda layers. This article outlines the specific threat posed by Lambda layers at model load time, its potential impact, and actionable steps to mitigate the risk.
Models flagged for this threat meet the following criteria:
Keras is an open-source deep learning library for developing machine learning models. It is a high-level API that works seamlessly with other machine learning libraries such as TensorFlow.
A Lambda layer in Keras is a special kind of layer that allows custom operations or transformations on data within a neural network. Keras provides predefined layers like Dense (for fully connected layers) or Conv2D (for convolutional layers), but a Lambda layer is used when a unique transformation or calculation isn't available in the standard layers.
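To make the concept concrete, the following is a minimal conceptual stand-in for what a Lambda layer does: it wraps an arbitrary Python function and applies it to the layer's input. This is a simplified sketch, not the real `keras.layers.Lambda` API.

```python
# Conceptual stand-in for a Keras Lambda layer (hypothetical class,
# not the real keras.layers.Lambda): it stores an arbitrary Python
# function and applies it to whatever flows through the layer.
class LambdaLayerSketch:
    def __init__(self, function):
        self.function = function  # any callable the model author chose

    def __call__(self, inputs):
        # The wrapped function runs with no restrictions on what it does.
        return self.function(inputs)

# A custom transformation not covered by standard layers: scale inputs by 2.
scale = LambdaLayerSketch(lambda xs: [x * 2 for x in xs])
print(scale([1, 2, 3]))  # → [2, 4, 6]
```

The key property is that the wrapped function is ordinary Python code chosen by whoever built the model, which is exactly what makes it a security concern when models come from untrusted sources.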
Though a Lambda layer in Keras gives the flexibility to customize operations within a neural network, it also allows arbitrary code execution. An attacker can exploit the Lambda layer in a Keras model to execute malicious code when the model is loaded.
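The underlying mechanism is that Keras serializes a Lambda layer's Python function as raw bytecode (using the standard-library `marshal` module, base64-encoded into the model config) and reconstructs a live function from that bytecode at load time, with no validation of what the code does. The sketch below demonstrates this round trip using only the standard library; `payload` stands in for attacker-controlled code.

```python
import base64
import marshal
import types

def payload():
    # Stand-in for attacker-controlled code; a real payload could call
    # os.system(...), exfiltrate data, etc.
    return "side effect ran"

# Serialize the function's raw bytecode, analogous to how Keras stores
# a Lambda layer's function inside the saved model config.
blob = base64.b64encode(marshal.dumps(payload.__code__))

# ...later, at "model load" time: the bytecode is rebuilt into a live
# function. Nothing inspects or restricts what that function does.
code = marshal.loads(base64.b64decode(blob))
restored = types.FunctionType(code, globals())

print(restored())  # → side effect ran
```

Because the deserialized function executes with the full privileges of the loading process, simply loading and using an untrusted model is enough to trigger the attacker's code.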
Further reading:
This model was not found to contain malicious code within its layers at this time; however, remote calls can still create an attack surface for exploitation and should be avoided.
An attacker could exploit a compromised model to:
Note: Malicious code execution using a Lambda layer can be achieved without impacting a model's performance; the user may never know that an attack has happened or is ongoing.
If possible, avoid using Keras models with Lambda layers, since a Lambda layer allows arbitrary code execution that an attacker can exploit. If you must load such models, only load them from trusted sources, and note that recent Keras versions support a safe loading mode that refuses to deserialize Lambda layers by default.
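Before loading an untrusted `.h5` model, you can also inspect it for Lambda layers. In the HDF5 format, the model architecture is stored as a JSON string (readable, for example, via `h5py` as `f.attrs["model_config"]`). The sketch below scans such a config string using only the standard library; the sample config is a hand-written, simplified example, not output from a real model.

```python
import json

def find_lambda_layers(model_config_json):
    """Recursively scan a Keras model_config JSON string and return the
    names of any Lambda layers found anywhere in the architecture."""
    hits = []

    def walk(node):
        if isinstance(node, dict):
            if node.get("class_name") == "Lambda":
                hits.append(node.get("config", {}).get("name", "<unnamed>"))
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(json.loads(model_config_json))
    return hits

# Simplified, hand-written config illustrating the structure:
config = (
    '{"class_name": "Sequential", "config": {"layers": ['
    '{"class_name": "Dense", "config": {"name": "dense"}}, '
    '{"class_name": "Lambda", "config": {"name": "custom_op"}}]}}'
)
print(find_lambda_layers(config))  # → ['custom_op']
```

If the scan reports any Lambda layers in a model from an untrusted source, treat the file as potentially malicious and do not load it.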