
Red Team Tools Release

Security researchers on our AI bug bounty platform, Huntr, are finding impactful new vulnerabilities in fundamental AI libraries at breakneck speed. We combed through the latest research to give the security community tools to test their environments and stay safe. Today we are releasing three new Nuclei vulnerability-scanning templates and Metasploit modules for several highly popular AI tools. Blue teamers can scan their environments to detect the vulnerabilities, and red teamers can use the Metasploit modules to demonstrate impact.

Protect AI responsibly disclosed these vulnerabilities to the maintainers, who rapidly fixed the issues, and we deeply respect and appreciate the work they do. If you’re interested in helping secure the AI/ML ecosystem, sign up at huntr.com.


BentoML Remote Code Execution

Severity: Critical
BentoML is an increasingly popular tool for packaging and deploying machine learning models. It simplifies the process of getting models into production and serving them to a large audience, which often means the server is publicly accessible. That makes it a very practical target for attackers looking to steal models or data, or simply to gain a foothold on the internal network.

Huntr researcher @pinkdraconian discovered a remote code execution vulnerability in BentoML caused by unsafe deserialization of user input. An attacker can craft an operating system command, use pickle to serialize it into a malicious object, then send that pickled object to the BentoML server, which will in turn execute the command. This critical flaw allows attackers to execute arbitrary code with a single request to the server, leading to model or data theft, server takeover, or further network compromise.
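To see why deserializing untrusted pickle data is so dangerous, consider the sketch below. It is a generic, harmless illustration of the technique, not the actual BentoML payload: pickle lets any object define `__reduce__`, which instructs the deserializer to call an attacker-chosen callable on load.

```python
import pickle


class MaliciousPayload:
    """A stand-in for an attacker's object. On unpickling, the tuple
    returned by __reduce__ is interpreted as (callable, args) and the
    callable is invoked -- in a real attack this would be something
    like (os.system, ("id",)). We use print so the demo is safe."""

    def __reduce__(self):
        return (print, ("code executed during unpickling!",))


# The attacker serializes the object...
malicious_bytes = pickle.dumps(MaliciousPayload())

# ...and the victim merely deserializes it. No method is ever called
# explicitly, yet attacker-chosen code runs inside pickle.loads().
result = pickle.loads(malicious_bytes)
```

This is why any service that unpickles network input, as the vulnerable BentoML endpoint did, is one request away from code execution; safe formats such as JSON do not carry executable instructions.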

Detailed report 

https://sightline.protectai.com/vulnerabilities/c9cb3654-6155-4e16-9dc0-46fa09d6a495/assess

Nuclei vulnerability scanning template

https://github.com/protectai/ai-exploits/blob/main/bentoml/bentoml-rce.yaml

Metasploit module

https://github.com/protectai/ai-exploits/tree/main/bentoml/bentoml_pickle_rce.py


Gradio LFI

Severity: High

Gradio is an almost ubiquitous library in the AI world that makes it simple to deploy a web interface around a machine learning model. The tool supports integration with major machine learning frameworks like TensorFlow or PyTorch and is widely used on Hugging Face to host demos of models. Gradio is often exposed publicly by large organizations meaning any attacks against it have potentially far-reaching consequences. 

Huntr researcher @ozelis found a vulnerability in Gradio that allows remote attackers to read arbitrary files on the Gradio server. This local file inclusion (LFI) vulnerability exposes files such as SSH keys, cloud credentials, and even the model files themselves, which can lead to complete server takeover or access to your cloud environment. The vulnerability stems from a lack of user-input sanitization in a backend request and affects even the simplest demo example from the Gradio documentation.
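The general defense against this class of bug is to resolve any user-supplied path and verify it still sits under the directory you intend to serve. The sketch below is not Gradio's actual code; it is a minimal illustration of that check, with a hypothetical serving root.

```python
from pathlib import Path

# Hypothetical directory the server intends to expose.
ALLOWED_ROOT = Path("/srv/app/static").resolve()


def safe_resolve(user_supplied: str) -> Path:
    """Resolve a user-supplied path and refuse anything that escapes
    ALLOWED_ROOT (e.g. '../../etc/passwd' or an absolute path)."""
    candidate = (ALLOWED_ROOT / user_supplied).resolve()
    # resolve() collapses '..' components and follows symlinks, so the
    # containment check runs against the path that would actually be read.
    if not candidate.is_relative_to(ALLOWED_ROOT):
        raise PermissionError(f"path traversal blocked: {user_supplied!r}")
    return candidate
```

A request for `css/site.css` resolves inside the root and is allowed; a request for `../../etc/passwd` resolves to `/etc/passwd`, fails the containment check, and is rejected, which is exactly the check the vulnerable endpoint was missing.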


Detailed report

https://sightline.protectai.com/vulnerabilities/a29c8632-64a5-4a95-a21c-450eb1fc5c88/assess

Nuclei vulnerability scanning template

https://github.com/protectai/ai-exploits/tree/main/gradio/gradio-lfi.yaml

Metasploit module

https://github.com/protectai/ai-exploits/tree/main/gradio/gradio_lfi.py


FastAPI DoS 

Severity: High

FastAPI is a modern, fast web framework for building APIs with Python, widely adopted for its performance and ease of use by companies such as Microsoft, Uber, and Netflix. In the AI world, it is often used to serve machine learning models in production. However, its widespread use also makes it a significant target.

The vulnerability was identified by Protect AI’s Threat Researcher @byt3bl33d3r, who built on the previous work of Huntr researcher @nicecatch2000. It involves a denial-of-service (DoS) attack that exploits a vulnerable regular expression in FastAPI's parsing of the Content-Type header. By sending only a handful of small requests, an attacker can effectively shut down the server.
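The mechanism behind this kind of regex DoS is catastrophic backtracking. The sketch below uses a classic illustrative pattern, not the actual regex from FastAPI's header parsing: nested quantifiers like `(a+)+` force the engine to try exponentially many ways to partition the input before concluding there is no match.

```python
import re
import time

# Illustrative vulnerable pattern (NOT FastAPI's real regex): the
# nested quantifiers mean roughly 2^n backtracking attempts on a
# near-miss input of length n.
EVIL_PATTERN = re.compile(r"^(a+)+$")


def time_failed_match(n: int) -> float:
    """Return the seconds spent failing to match n 'a's plus a '!'.
    The trailing '!' guarantees the overall match fails, so the engine
    must exhaust every partition of the 'a' run before giving up."""
    payload = "a" * n + "!"
    start = time.perf_counter()
    matched = EVIL_PATTERN.match(payload)
    elapsed = time.perf_counter() - start
    assert matched is None
    return elapsed


# Each extra character roughly doubles the work, so a request body of
# a few dozen bytes can pin a CPU core for seconds.
slow = time_failed_match(22)
fast = time_failed_match(10)
```

Because the cost grows exponentially with input length, an attacker does not need bandwidth or volume; a handful of tiny, malformed headers is enough to starve the event loop, matching the behavior described above.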


Detailed report 

https://sightline.protectai.com/vulnerabilities/55f02557-83d2-4286-a83e-2ace613dee0c/assess

Nuclei vulnerability scanning template

https://github.com/protectai/ai-exploits/blob/main/fastapi/fastapi_dos.yaml

Metasploit module

https://github.com/protectai/ai-exploits/blob/main/fastapi/flask_dos.py 


At Protect AI, we are taking a proactive approach to building a safer AI-powered world. We are grateful to the Huntr community of researchers and maintainers for their continued efforts in AI threat research via the Huntr platform. The tools we released today are now available in Sightline, where you will also find a full database of AI vulnerabilities to help you defend against these unique threats. To learn more, visit Sightline.