ScanMe / Models

Last scanned: 14 hours ago
Overall result: Unsafe


Deserialization Threats

| Code           | Description                                                                   | Result     |
|----------------|-------------------------------------------------------------------------------|------------|
| PAIT-KERAS-100 | Keras Model Lambda Layer Can Execute Code at Load Time                        | Suspicious |
| PAIT-KERAS-101 | Keras Model Lambda Layer Arbitrary Code Execution Detected at Model Load Time | Unsafe     |
| PAIT-PKL-100   | Pickle Model Arbitrary Code Execution Detected at Model Load Time             | Unsafe     |
| PAIT-PYTCH-100 | PyTorch Model Arbitrary Code Execution Detected at Model Load Time            | Unsafe     |
| PAIT-PYTCH-101 | PyTorch Model Suspicious Code Execution Detected at Model Load Time           | Suspicious |
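The pickle and PyTorch findings above exist because pickle is a program, not a data format: deserializing a model file can invoke arbitrary callables. A minimal sketch of the mechanism, using a hypothetical `Payload` class and the harmless `os.getcwd` in place of a real attacker's function:

```python
import os
import pickle

# Illustrative only: __reduce__ tells pickle which callable to invoke
# during deserialization. An attacker would use os.system or similar;
# here we use the harmless os.getcwd to show the mechanism.
class Payload:
    def __reduce__(self):
        return (os.getcwd, ())  # called by pickle.loads, not by us

blob = pickle.dumps(Payload())

# Simply loading the bytes runs the embedded call, at model load time,
# before any model code is ever executed deliberately.
result = pickle.loads(blob)
print(result)  # whatever os.getcwd() returned, not a Payload instance
```

Note that the object you get back is the return value of the embedded call, so a malicious model file can execute code even if the loaded "model" is never used.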

Backdoor Threats

Runtime Threats

| Code        | Description                                                               | Result |
|-------------|---------------------------------------------------------------------------|--------|
| PAIT-TF-301 | TensorFlow SavedModel Contains Arbitrary Code Execution at Model Run Time | Unsafe |
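Beyond scanning, pickle-based findings like those above can be mitigated in depth by refusing to resolve unexpected globals during deserialization. This is a sketch of the restricted-`Unpickler` pattern from the Python `pickle` documentation; the allow-list here is an assumption for illustration, not Guardian's actual policy:

```python
import builtins
import io
import pickle

# Defense-in-depth sketch: only globals on this allow-list may be
# resolved while unpickling. Everything in the list lives in builtins,
# which is why find_class can use getattr(builtins, name).
ALLOWED = {("builtins", "list"), ("builtins", "set")}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in ALLOWED:
            return getattr(builtins, name)
        # Reject os.system, subprocess.*, and any other unexpected global.
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain containers reference no blocked globals and load fine...
print(restricted_loads(pickle.dumps([1, 2, 3])))

# ...while a payload referencing a global outside the allow-list fails.
try:
    restricted_loads(pickle.dumps(print))  # builtins.print is not allowed
except pickle.UnpicklingError as exc:
    print(exc)
```

This blocks the delivery mechanism rather than any specific payload, which is why scanners and restricted loaders complement each other.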

Protect AI's security scanner detects threats in model files. With Protect AI's Guardian, you can scan models for threats before ML developers download them for use, and apply policies based on your risk tolerance.

Found a new threat? Earn bounties and recognition for your discoveries by submitting them through Huntr.