Unsafe
This is a flyout modal for training_args.bin
Deserialization threats in AI and machine learning systems pose significant security risks, particularly for models serialized with Pickle, Python's default serialization tool.
If a model has been flagged for this issue, it means:
Pickle is Python's original serialization module, used to serialize and deserialize Python objects so they can be shared between processes or machines. While convenient, Pickle poses significant security risks when used with untrusted data, because it can execute arbitrary code during deserialization. This makes it vulnerable to remote code execution attacks if an attacker can control the serialized data.
In this case, loading the model will execute whatever malicious instructions have been inserted into it.
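To see why loading alone is dangerous, here is a minimal, harmless sketch of the mechanism: pickle invokes an object's `__reduce__` method during deserialization, so the serialized bytes can make the loader call any callable. The `MaliciousPayload` class here is hypothetical and deliberately benign (it calls `str.upper`); a real attacker would substitute something like `os.system`.

```python
import pickle

class MaliciousPayload:
    def __reduce__(self):
        # pickle.loads will call str.upper("pwned") during deserialization.
        # An attacker would return (os.system, ("<shell command>",)) instead.
        return (str.upper, ("pwned",))

blob = pickle.dumps(MaliciousPayload())
result = pickle.loads(blob)  # the attacker-chosen callable runs here
print(result)  # PWNED
```

Note that the victim never has to call anything besides `pickle.loads` (or a framework loader that wraps it): deserialization itself is the execution step.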
In short, this model:
- uses pickle for serialization
- pickle executes any embedded Python code when deserializing
This attack can harm your organization by running arbitrary code in any environment that loads the model, potentially leading to remote code execution and full system compromise.
If possible, use a different model format, such as SafeTensors, to remove this class of code injection attack from impacting your work entirely.
If not possible, reach out to the model creator and alert them that the model has failed our scan. You can even link to the specific page in our Insights Database to share our most up-to-date findings.
The model provider should also describe how they corrected this issue in their release notes.
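If you must triage a pickle-based file yourself before anyone loads it, the standard library's `pickletools` module can list the opcodes in the stream without executing it. This is a hedged sketch, not a complete scanner: the opcode set below is an assumption about which opcodes can resolve globals or invoke callables, and their presence flags risk rather than proving malice.

```python
import pickle
import pickletools

# Opcodes that import globals or call callables during unpickling
# (assumed risk list for this illustration).
SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def flag_suspicious_opcodes(blob: bytes) -> set:
    # genops disassembles the stream without running it.
    return {op.name for op, _, _ in pickletools.genops(blob)} & SUSPICIOUS

# A plain-data pickle triggers no flags.
print(flag_suspicious_opcodes(pickle.dumps({"epochs": 5})))  # set()

# A payload that invokes a callable does.
class Payload:
    def __reduce__(self):
        return (print, ("attack",))

print(flag_suspicious_opcodes(pickle.dumps(Payload())))
```

Because the bytes are only disassembled, this check is safe to run on untrusted files, but a clean result on plain data still does not make loading arbitrary pickles a good practice.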