Deserialization threats pose significant security risks in AI and machine learning systems, particularly for models serialized with Pickle, Python's default serialization tool.
If a model has been reported to fail for this issue, it means:

- The model uses pickle for serialization
- pickle executes any embedded Python code when deserializing

Pickle is Python's original serialization module, used to serialize and deserialize Python objects so they can be shared between processes or other machines. While convenient, Pickle poses significant security risks when used with untrusted data because it can execute arbitrary code during deserialization. This makes it vulnerable to remote code execution attacks if an attacker can control the serialized data.

In this case, simply loading the model executes whatever malicious instructions have been embedded in it.
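To make this concrete, here is a minimal, benign sketch of how a pickle payload executes code at load time. The MaliciousPayload class and the echoed string are illustrative; a real payload would run something far worse than a harmless shell command.

```python
import os
import pickle

# __reduce__ tells pickle how to "reconstruct" this object. pickle calls
# the returned callable with the given args during loading, so
# deserialization itself runs the attacker's code.
class MaliciousPayload:
    def __reduce__(self):
        return (os.system, ("echo pwned",))  # benign stand-in for real malware

blob = pickle.dumps(MaliciousPayload())

# The victim only has to *load* the data -- no method call is required.
pickle.loads(blob)  # prints "pwned" before anything is even returned
```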
This attack can harm your organization by giving an attacker unauthorized access to, or control of, any machine that loads the model. The sequence below illustrates how:
```mermaid
sequenceDiagram
    participant Attacker
    participant Model File
    participant Victim's Machine
    participant Victim
    Attacker->>Attacker: Create malicious Python object
    Attacker->>Attacker: Serialize object using Pickle
    Attacker->>Model File: Embed serialized data in model artifact
    Attacker->>Victim: Distribute compromised model file
    Victim->>Victim's Machine: Load compromised model file
    Victim's Machine->>Model File: Deserialization process triggered
    Model File->>Victim's Machine: Malicious code executes
    Victim's Machine->>Attacker: Unauthorized access/control gained
```
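On the victim's side, the load step in the diagram can be as innocuous as a single torch.load call, since PyTorch .pt checkpoints use pickle under the hood. A sketch, where untrusted_model.pt is a hypothetical path:

```python
import torch

# Default torch.load runs the full pickle machinery, so any payload
# embedded in the checkpoint executes right here.
state = torch.load("untrusted_model.pt")

# Recent PyTorch versions accept weights_only=True, which restricts
# unpickling to tensors and primitive types and rejects arbitrary
# callables -- a useful mitigation when a .pt file cannot be avoided.
state = torch.load("untrusted_model.pt", weights_only=True)
```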
If possible, use a different model format, such as SafeTensors, to eliminate this class of code injection attack entirely.
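For illustration, a sketch of the SafeTensors route using the third-party safetensors package (pip install safetensors); the file name is hypothetical:

```python
import torch
from safetensors.torch import save_file, load_file

# SafeTensors stores raw tensor bytes plus a JSON header; loading is a
# pure data read with no code-execution path.
tensors = {"weight": torch.zeros(10, 10)}
save_file(tensors, "model.safetensors")

loaded = load_file("model.safetensors")  # nothing executes on load
print(loaded["weight"].shape)
```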
If that is not possible, reach out to the model creator and alert them that the model has failed our scan. You can even link to the specific page on our Insights Database to share our most up-to-date findings.
The model provider should also report what they did to correct this issue as part of their release notes.