In the second chapter of navigating AI/ML security concerns, let's explore Adam Nygate's lightning talk from last month's annual RSA Conference on vulnerabilities in the AI supply chain. In this talk, Adam sheds light on what makes these vulnerabilities unique and how they differ from traditional software security risks. With AI revolutionizing industries, understanding and fortifying this supply chain is more important than ever. You can watch the full talk in the video below.
The AI supply chain is not just an extension of the traditional software supply chain; it's a whole new beast with its own set of challenges and vulnerabilities. Unlike traditional software, the AI supply chain encompasses three main asset classes: datasets, models, and software. These components are intricately interwoven, creating a complex ecosystem that presents unique security challenges.
Datasets are the fuel that powers AI. They come in three flavors:
Proprietary Datasets: Sensitive data like patient records in healthcare.
Third-Party Open-Source Datasets: Large, freely available datasets from third-party repositories like Kaggle (see the sketch after this list).
User-Provided Data: Real-time inputs from users that continuously refine AI models.
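To make the trust boundary concrete, here is a minimal sketch of how third-party data typically enters a pipeline, assuming the Hugging Face `datasets` library and an illustrative dataset name, plus a simple digest check a team might pin to catch silent upstream changes:

```python
import hashlib
from datasets import load_dataset  # pip install datasets (assumed available)

# One line pulls files authored by an outside party straight into the
# training pipeline; this is where the supply-chain trust boundary is crossed.
ds = load_dataset("imdb", split="train")  # illustrative dataset name

def sha256_of(path: str) -> str:
    """Hash a local data file so later runs can detect silent tampering."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record the digest of any raw file at vetting time, then verify on each run:
# assert sha256_of("train.csv") == KNOWN_GOOD_DIGEST, "dataset changed upstream"
```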
Models can be divided into:
First-Party Models: Custom-built from scratch using proprietary or third-party data.
Third-Party Models: Ready-made models from platforms like Hugging Face, which can be fine-tuned for specific applications.
Software is the third asset class. It includes everything from proprietary software to open-source tools, APIs, and ML frameworks essential for data access, experimentation, and model deployment.
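To see how tightly the three asset classes interlock in practice, consider a minimal sketch, assuming the Hugging Face `transformers` library and an illustrative model name:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Two calls pull in three supply-chain dependencies at once: third-party
# software (the transformers framework), a third-party model (weights
# published by an outside party), and its accompanying tokenizer files.
model_id = "distilbert-base-uncased"  # illustrative model name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Fine-tuning layers proprietary data on top of these inherited assets,
# so any compromise upstream propagates into the first-party model.
```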
Several new attack vectors have emerged in the AI landscape that target AI models and datasets directly.
AI-specific software, too, presents security challenges that are distinct from traditional software. AI models behave like black boxes, making it nearly impossible to detect and fix malicious alterations. They often encapsulate vast amounts of intellectual property in a single file, and AI systems typically have privileged access to highly sensitive information, making them attractive targets for cybercriminals. Moreover, thanks to rapid open-source development, many AI tools aren't built with enterprise-grade security in mind.
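One reason the black-box problem is so acute: many model formats are built on Python's pickle, which executes code during deserialization. Here is a minimal, self-contained sketch of how a tampered model file can run arbitrary code the moment it is loaded; the class and file names are hypothetical, and the payload is a harmless echo standing in for a real one:

```python
import pickle

class TamperedModel:
    """Looks like an ordinary model object, but defines what pickle
    should execute when the file is deserialized."""
    def __reduce__(self):
        import os
        # A harmless stand-in; a real attacker's payload would run here,
        # invisible to anyone who only inspects the model's predictions.
        return (os.system, ("echo 'model file executed code on load'",))

# Attacker's side: serialize the payload into a file named like a model.
with open("model.pkl", "wb") as f:
    pickle.dump(TamperedModel(), f)

# Victim's side: a routine-looking load is all it takes to trigger it.
with open("model.pkl", "rb") as f:
    pickle.load(f)
```

Nothing about the resulting file's size or extension hints at the payload, which is exactly why opaque model artifacts make such attractive carriers.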
To address these security challenges, we need purpose-built tools. Protect AI offers Guardian, which scans both first-party and third-party AI models for vulnerabilities during download and deployment, ensuring they are secure before use. Protect AI's Sightline, the world's first database dedicated to AI and ML supply chain vulnerabilities, delivers early warnings and detailed security advisories about vulnerabilities discovered by Protect AI's huntr community, a global network of 15,000 security researchers. Sightline also employs Nuclei templates to scan AI infrastructure, identifying actually exploitable vulnerabilities and reducing false positives, so companies can take immediate, effective action to secure their AI assets.
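Guardian's and Sightline's internals aren't public, but the core idea behind model scanning can be illustrated with a minimal standard-library sketch: statically walk a pickle's opcode stream, without executing it, and flag imports into dangerous modules before the file is ever loaded. The SUSPICIOUS set and scan_pickle helper are illustrative names, not Protect AI APIs:

```python
import pickletools

# Module roots that would let a pickle call out to the OS or interpreter.
SUSPICIOUS = {"os", "posix", "nt", "subprocess", "builtins", "runpy"}

def scan_pickle(path: str) -> list[str]:
    """Walk the opcode stream without executing it and flag imports into
    dangerous modules. GLOBAL carries 'module name' inline as its argument;
    STACK_GLOBAL takes them from the two preceding string pushes."""
    findings, strings = [], []
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, _pos in pickletools.genops(data):
        if isinstance(arg, str):
            strings.append(arg)
        if opcode.name == "GLOBAL":
            module = arg.split()[0]
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            module = strings[-2]
        else:
            continue
        if module.split(".")[0] in SUSPICIOUS:
            findings.append(module)
    return findings

if hits := scan_pickle("model.pkl"):
    print("refusing to load; suspicious imports:", hits)
```

Real scanners go much further, covering formats beyond pickle and maintaining curated signatures, but even this shallow check would catch the tampered file from the earlier sketch before it ever executed.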
As artificial intelligence becomes more integrated into different aspects of business and society, it is crucial to prioritize AI security just as much as AI development. By staying informed about emerging threats and leveraging specialized tools like those offered by Protect AI, organizations can more effectively protect their AI assets and the sensitive data they handle. Because in the world of AI, the biggest threat might just be the one you can't see coming.