Suspicious Deserialization Threats

Code | Description | Results | Link
---|---|---|---
PAIT-PYTCH-101 | PyTorch Model Suspicious Code Execution Detected at Model Load Time | Suspicious | Learn more
Backdoor Threats

Runtime Threats
Findings for py38-pylingual-v1-segmenter
Deserialization threats pose significant security risks to AI and machine learning systems, particularly for models serialized with pickle, Python's default serialization module.
If a model is flagged with this issue, it means:
Pickle is Python's original serialization module, used to serialize and deserialize Python objects so they can be shared between processes or machines. While convenient, pickle poses significant security risks when used with untrusted data, because it can execute arbitrary code during deserialization. This makes it a vector for remote code execution if an attacker can control the serialized data.
In this case, loading the model executes whatever instructions have been embedded in it.
- The model uses pickle for serialization
- pickle executes any embedded Python code when deserializing
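The load-time execution described above is easy to demonstrate. The sketch below is a harmless stand-in for a malicious payload (the class name and printed message are illustrative): `__reduce__` lets any pickled object name a callable that will be invoked during `pickle.loads`.

```python
import pickle

class Payload:
    # __reduce__ tells pickle how to reconstruct this object: call the given
    # callable with the given arguments. This is the mechanism attackers abuse.
    def __reduce__(self):
        # A real attack would return something like (os.system, ("<command>",));
        # here a harmless print shows that code runs at load time.
        return (print, ("code executed during pickle.loads!",))

data = pickle.dumps(Payload())
pickle.loads(data)  # the print call runs here, before anything is returned
```

An attacker would substitute a command-execution callable for the harmless `print`, which is why untrusted pickle files must never be loaded.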
This attack can harm your organization in several ways: the embedded code runs with the privileges of the process that loads the model, so it can steal data or credentials, install malware, or give an attacker a foothold in your environment.
If possible, use a different model format such as SafeTensors, which stores only tensor data and eliminates this class of code injection attack entirely.
If that is not possible, reach out to the model creator and alert them that the model has failed our scan. You can even link to the specific page on our Insights Database to share our most up-to-date findings.
As part of their release notes, the model provider should either describe what they did to correct the issue or explain why their use of the flagged operator is legitimate.
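If you must keep loading pickle-based files in the meantime, the restricted-unpickler pattern from the Python pickle documentation at least limits which globals a payload can resolve. A defensive sketch; the allowlist here is illustrative and should be tailored to what your models legitimately need:

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    # Illustrative allowlist: only these (module, name) pairs may be resolved.
    ALLOWED = {("builtins", "list"), ("builtins", "dict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"global {module}.{name} is forbidden")

def restricted_loads(data: bytes):
    """Drop-in replacement for pickle.loads that enforces the allowlist."""
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Benign data still round-trips:
safe = restricted_loads(pickle.dumps([1, 2, 3]))
```

Note this is a mitigation, not a guarantee: an allowlist is only as safe as the entries on it, so prefer switching formats where you can.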