Protect AI and Hugging Face Partner to Secure the Machine Learning Supply Chain

Today, we’re excited to announce our partnership with Hugging Face to help secure the AI/ML supply chain for the community. Together, Protect AI and Hugging Face are building on our shared commitment to open source and AI security, enabling enterprises to adopt and innovate with AI confidently. By advancing the security of AI models, this partnership sets a new standard for securing AI. It will play a major role in promoting the safe and secure adoption of AI across industries, while furthering the democratization of AI by making secure, reliable models accessible to everyone.

Providing Security Scans for Over 1 Million Machine Learning Models

Hugging Face is the leading force in democratizing artificial intelligence (AI) and hosts the world’s largest repository of publicly available machine learning (ML) models, the Hugging Face Hub. As ML models become integral to critical business operations, ensuring that these models are secure is crucial for preventing breaches, data leaks, and decision-making errors. With the addition of Protect AI’s Guardian scanner, every model on the Hub, whether developed by a large tech company or an independent researcher, will be subject to robust security analysis, providing comprehensive security alerts and deep insights into the safety of foundation models before use.

 


Real-Time Security Reports with Insights DB

As part of this announcement, we’re launching Protect AI’s Insights DB, a dynamic repository of security reports that details the types of threats we scan for, how these threats manifest in practice, and the results of our scans. Insights DB is a vital educational resource that helps enterprises not only understand the security and safety of a model, but also gain crucial knowledge about the specific risks associated with detected threats.

In addition to viewing model-specific reports, users can browse the database to see a list of models deemed safe to use, or report potential threats we may have missed by joining huntr, our threat research community. Our bug bounty program rewards those who discover previously unreported vulnerabilities, ensuring that our coverage remains as comprehensive as possible.

 


Understanding the Unique Risks in the Machine Learning Supply Chain

The risks we’re addressing with this partnership differ fundamentally from those in the traditional software supply chain. The rise of new methodologies like MLOps and MLSecOps—which are the machine learning equivalents of DevOps and DevSecOps—has introduced a new set of supply chain assets, such as models and datasets, which come with their own security challenges.

One such example comes from our research paper, Model Files are Invisible Viruses, published earlier this year. In it, our Threat Research team demonstrated how malicious actors can exploit the way ML models are deserialized. This vulnerability allows attackers to hide malicious code within the model file itself, posing a serious risk to any system that loads the compromised model.
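The class of attack described above can be illustrated with Python's pickle format, a common serialization format for ML model files. The sketch below is a minimal, self-contained illustration (not code from the paper): pickle calls an object's `__reduce__` method during deserialization, so an attacker can make merely *loading* a model file execute an arbitrary callable. The payload here is a harmless `os.getcwd` call, but it could just as easily be `os.system`.

```python
import os
import pickle


class MaliciousPayload:
    """Stand-in for a booby-trapped object hidden inside a model file."""

    def __reduce__(self):
        # pickle will call os.getcwd() when this object is deserialized.
        # An attacker would substitute os.system or any other callable.
        return (os.getcwd, ())


# Serialize the object, as if saving a "model" file to disk.
blob = pickle.dumps(MaliciousPayload())

# Simply loading the bytes runs the attacker's code; no method call needed.
result = pickle.loads(blob)
print(result)  # the "model" is whatever the payload's callable returned
```

Because the code runs at load time, even inspecting the resulting object is already too late; this is why scanners analyze model files statically instead of loading them.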

Scanning Models with Guardian

Guardian is Protect AI’s flagship product designed to scan AI/ML model artifacts for threats at every stage of their lifecycle. These threats include model serialization attacks, known CVEs, backdoors, and more. By incorporating both first-party and crowdsourced threat research from our vast community of 15,000+ hackers and open-source maintainers, known as the huntr community, we are able to detect and respond to emerging threats at scale.

How Guardian Works

  • Comprehensive Model Scanner: Guardian automatically scans models for vulnerabilities throughout the AI/ML lifecycle. From a developer’s initial download to deployment monitoring, it ensures that models are secure before they are used.

  • Policy Enforcement: More than just a scanner, Guardian enforces security policies, ensuring that models failing security checks are blocked from entering production environments. This means enterprises can confidently adopt models without worrying about unvetted risks.
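To make the scanning step above concrete, here is a hypothetical sketch of one kind of check a model scanner can perform; this is not Guardian's actual implementation. Python's standard pickletools module can walk a pickle stream's opcodes without ever executing it, flagging the opcodes (such as GLOBAL and REDUCE) that are capable of importing modules or calling functions at load time:

```python
import io
import pickle
import pickletools

# Opcodes that can make a pickle import modules or call functions on load.
# A static scanner flags these rather than ever deserializing the file.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}


def flag_suspicious_opcodes(data: bytes) -> set:
    """Statically scan a pickle byte stream; return any risky opcode names."""
    found = set()
    for opcode, arg, pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name in SUSPICIOUS_OPCODES:
            found.add(opcode.name)
    return found


# Plain data (e.g. a dict of weights) produces no findings.
print(flag_suspicious_opcodes(pickle.dumps({"weights": [0.1, 0.2]})))
```

A policy-enforcement layer could then reject any model artifact for which this scan returns a non-empty set, blocking it before it reaches a production environment. Real scanners refine this with allowlists of known-safe imports to reduce false positives, since legitimate class instances also use some of these opcodes.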

Building a Safer AI-Powered World Together

This partnership with Hugging Face is a monumental step forward in our mission to build a safer AI-powered world. By adding our advanced security scans to the Hugging Face Hub, elevating Hugging Face to a top-tier partner on huntr, and serving Insights DB to the community, we have taken significant steps toward securing the AI/ML supply chain.

Ensuring the security of your AI models is paramount in today's threat landscape. With Guardian, you have access to the most comprehensive model scanner and security enforcement tool available, helping protect you from unique AI vulnerabilities without compromising your enterprise's AI-driven innovations.

Explore how Guardian can safeguard your machine learning models by requesting a demo or speaking with one of our team today.