Introduction
At Protect AI we are taking a proactive approach to identifying and addressing security risks in AI systems, to provide the world with critical intelligence on vulnerabilities and how to fix them.
Protect AI’s huntr is the world's first AI/ML bug bounty program. Our community of 15,000+ members hunts for impactful vulnerabilities across the entire OSS AI/ML supply chain. Through our own research and the huntr community, we’ve found that the tools used in the supply chain to build the machine learning models that power AI applications are vulnerable to unique security threats. These tools are open source, downloaded thousands of times a month to build enterprise AI systems, and can come out of the box with vulnerabilities that lead directly to complete system takeover, such as unauthenticated remote code execution or local file inclusion.

This report contains 32 vulnerabilities. You can find all the details of this month's vulnerabilities in the table below, or head over to protectai.com/sightline to search the comprehensive database of all huntr findings and download tools to detect, assess, and remediate them within your organization's AI supply chain.
It is important to note that all vulnerabilities were reported to the maintainers at least 45 days prior to the publication of this report, and we continue to work with maintainers to ensure a timely fix. The table also includes our recommendations for actions to take immediately if you have these projects in production. If you need help mitigating these vulnerabilities in the meantime, please reach out to community@protectai.com; we’re here to help.
The Month's Top Vulnerabilities
Remote Code Execution (RCE) in LoLLMs
https://huntr.com/bounties/a55a8c04-df44-49b2-bcfa-2a2b728a299d
Impact: This vulnerability can lead to an attacker running arbitrary code on the server.
A vulnerability present in older versions of llama-cpp-python, combined with the binding_zoo feature in the LoLLMs web server, can allow attackers to use a maliciously crafted, third-party-hosted model to execute code remotely.
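The write-up above does not spell out the exploit chain, but the core lesson is that model files pulled from third-party hosts are untrusted input. Below is a minimal, hypothetical hardening sketch, not code from LoLLMs or llama-cpp-python: the allowlisted hosts, file names, and pinned digest are assumptions you would replace with your own, and the idea is simply to restrict where models may be fetched from and to verify a known-good hash before loading anything.

```python
# Hypothetical mitigation sketch: restrict model downloads to allowlisted hosts and
# verify the file's SHA-256 digest before handing it to the inference library.
# None of these names come from the LoLLMs or llama-cpp-python codebases.
import hashlib
from pathlib import Path
from urllib.parse import urlparse

TRUSTED_HOSTS = {"huggingface.co"}  # assumption: replace with hosts you actually trust


def is_trusted_url(url: str) -> bool:
    """Reject model URLs that do not point at an allowlisted host."""
    host = urlparse(url).hostname or ""
    return host in TRUSTED_HOSTS or any(host.endswith("." + h) for h in TRUSTED_HOSTS)


def verify_model_file(path: Path, expected_sha256: str) -> bool:
    """Compare the downloaded file's digest to a pinned, known-good value."""
    return hashlib.sha256(path.read_bytes()).hexdigest() == expected_sha256


if __name__ == "__main__":
    url = "https://huggingface.co/example/model.gguf"  # hypothetical model URL
    local_copy = Path("model.gguf")                    # hypothetical local path
    pinned_digest = "0" * 64                           # placeholder: pin the real digest

    if not is_trusted_url(url):
        raise SystemExit("refusing to fetch a model from an untrusted host")
    if local_copy.exists() and not verify_model_file(local_copy, pinned_digest):
        raise SystemExit("model file does not match the pinned digest; do not load it")
```

Pinning hashes does not make an old, vulnerable llama-cpp-python safe; upgrading the dependency is still the primary fix, and the sketch only reduces exposure to attacker-controlled model files.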
Denial of Service (DoS) in mintplex-labs/anything-llm
https://huntr.com/bounties/7bb08e7b-fd99-411e-99bc-07f81f474635/
Impact: This vulnerability allows an attacker to shut down the server through the file upload endpoint.
The vulnerability is present in the file upload endpoint, where a specially crafted request can cause the server to shut down. The issue arises from the server's inability to properly handle certain types of upload requests, making it susceptible to a denial-of-service (DoS) attack.
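The advisory does not reproduce the malformed request here, but the underlying pattern is common: an upload handler that lets an unexpected payload raise an unhandled error, or accepts unbounded request bodies, can take the whole process down. The sketch below illustrates that defensive pattern only; anything-llm is a Node.js project, and this Python/Flask example, including the endpoint name and size cap, is a hypothetical illustration rather than its actual code.

```python
# Generic hardening sketch for a file upload endpoint: cap the request size and wrap
# processing in error handling so a malformed upload returns an error response
# instead of crashing the server process. Illustrative only.
from flask import Flask, jsonify, request
from werkzeug.exceptions import RequestEntityTooLarge

app = Flask(__name__)
app.config["MAX_CONTENT_LENGTH"] = 25 * 1024 * 1024  # assumption: 25 MB upload cap


@app.errorhandler(RequestEntityTooLarge)
def too_large(_err):
    # Oversized bodies get a 413 instead of exhausting memory.
    return jsonify(error="file too large"), 413


@app.post("/upload")
def upload():
    file = request.files.get("file")
    if file is None or not file.filename:
        return jsonify(error="no file provided"), 400
    try:
        ingest_document(file)  # hypothetical ingestion step
    except Exception:
        # Never let a parsing failure propagate and kill the worker.
        return jsonify(error="could not process upload"), 422
    return jsonify(status="ok"), 200


def ingest_document(file):
    # Placeholder for real document parsing/ingestion logic.
    file.read()
```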
Remote Code Execution (RCE) in mintplex-labs/anything-llm
https://huntr.com/bounties/4f2fcb45-5828-4bec-985a-9d3a0ee00462
Impact: This vulnerability can allow attackers to remotely execute code on the server.
The vulnerability involves injecting malicious code into the LocalAiBasePath parameter, which is then written to a .env file. Through a sequence of subsequent HTTP requests, this code can be triggered, leading to server takeover.
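Mitigating this class of issue comes down to treating the configuration value as untrusted input before it is ever persisted. The sketch below is a hypothetical illustration, not anything-llm code (the project is written in JavaScript): the environment-variable key, file path, and allowlist are assumptions, and the point is to accept only a plain http(s) URL with no characters that could smuggle extra lines or code into the .env file.

```python
# Hypothetical validation sketch: only persist a user-supplied base URL into a .env
# file after confirming it is a plain http(s) URL containing no characters that
# could inject additional variables or code. All names here are illustrative.
import re
from urllib.parse import urlparse

SAFE_VALUE = re.compile(r"^[A-Za-z0-9._:/\-]+$")  # assumption: conservative allowlist


def validate_base_path(value: str) -> str:
    parsed = urlparse(value)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        raise ValueError("base path must be an http(s) URL")
    if not SAFE_VALUE.fullmatch(value):
        raise ValueError("base path contains characters not allowed in .env values")
    return value


def write_env_value(env_path: str, key: str, value: str) -> None:
    # Append a single validated KEY=value line; the validation above guarantees the
    # value cannot break out onto additional lines of the file.
    with open(env_path, "a", encoding="utf-8") as env_file:
        env_file.write(f"{key}={validate_base_path(value)}\n")


if __name__ == "__main__":
    # Hypothetical key name and path, for illustration only.
    write_env_value(".env.example", "LOCAL_AI_BASE_PATH", "http://localhost:8080/v1")
```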