
Protect AI CEO Ian Swanson Delivers Testimony in Congressional Hearing on AI Security

Today, I was honored to have the opportunity to deliver testimony at a congressional hearing on "Considering DHS' and CISA's Role in Securing Artificial Intelligence."

This was an important moment for Protect AI and the industry. We firmly believe the effort to protect AI must be equal to the immense value it delivers for countless businesses today and the massive impact generative AI will have in the future. Against the backdrop of increased regulation from the White House to the European Parliament, more front-page headlines on AI/ML security risks, and the proliferation of AI/ML-enabled technology delivering business value, the need to secure AI/ML applications has never been greater.

The Time is Now to Protect AI

Organizations have been building and deploying AI globally for more than a decade, a trend that predates the recent surge in LLMs and generative AI applications. Global businesses and the United States Government alike use a significant number of machine learning (ML) models, likely in the millions, for critical processes ranging from defense systems to accelerating administrative tasks.

Knowing this, I posed a critical question to the committee: if they were to request a comprehensive inventory of all ML models in use at an enterprise or a USG agency, detailing each model's stage in the life cycle (experimentation, training, or deployment), the data it processes, and the personnel involved (employees, government personnel, and contractors), would any witness, business, or agency be able to furnish a complete and satisfactory response?

We believe the answer is “No.” Here’s why.

Businesses and Governments Must See, Know, and Manage Risk in Their AI

Most AI applications and ML systems are highly dispersed and typically rely on widely used open-source assets integral to the AI/ML lifecycle. This sets the stage for a major security vulnerability akin to the SolarWinds incident. The impact of such a breach on government AI systems could be enormous, and even more difficult to quantify, fix, and investigate in an AI/ML environment. This highly complex technology domain is at risk, and new threats already exist. Organizations today lack the ability to do three critical things needed to manage security risks, deploy safer applications, and be more resilient after an attack occurs.

These three things are:

  • You can't SEE the threats. AI/ML systems are fragmented, complex, and dynamic. This creates hidden security risks that escape current application security governance and control policies.
  • You don't KNOW where the threats lie. The rapidly evolving adoption of AI/ML adds an entirely new challenge for businesses trying to keep their applications secure and compliant. Safeguarding against a potential "SolarWinds" moment in ML is business critical. Manufacturers and consumers of AI need to know where threats lie in their ML systems so they can pinpoint and remediate risk.
  • AI threats are difficult to MANAGE. AI/ML security vulnerabilities are difficult to remediate. When operational, technological, or reputational security risks are identified that could harm customers, employees, and partners, the business must respond quickly and mitigate them to reduce incident response times.

The Need for an ML-BOM

To overcome these challenges, an ML Bill of Materials (ML-BOM) is needed to provide comprehensive visibility and auditability of AI. We advised the committee that US businesses and federal government agencies should begin implementing the practice of MLSecOps, starting by urging the government, its contractors, and regulated industries to adopt an ML-BOM. An ML-BOM helps organizations identify risks and establishes a systematic way to gain the visibility needed to quickly recognize threats deep inside their ML systems and AI applications. It also shows them where the risks are, enabling them to scan their AI/ML systems, models, and tools for unique AI vulnerabilities and remediate them. Finally, coupling the ML-BOM with documented policies helps organizations improve their security posture, run incident response management processes, and enforce human-in-the-loop checks to meet existing and future regulatory requirements.
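To make the idea concrete, here is a minimal sketch of what a single ML-BOM entry might capture. The field names below are illustrative assumptions rather than a formal schema (existing SBOM standards such as CycloneDX are evolving to cover ML components); the point is simply that each model is recorded along with its lifecycle stage, data, dependencies, and owners.

```python
import json

# Illustrative ML-BOM entry (hypothetical field names, not a formal schema):
# it records what a model is, where it sits in the lifecycle, what data it
# touches, which open-source assets it depends on, and who is responsible.
ml_bom_entry = {
    "model_name": "loan-approval-classifier",
    "version": "2.3.1",
    "lifecycle_stage": "deployed",          # e.g. experimentation | training | deployed
    "framework": "scikit-learn==1.3.0",
    "artifact": {
        "format": "pickle",
        "sha256": "<artifact digest>",
        "source": "internal-model-registry",
    },
    "training_data": ["applications_2019_2023"],
    "upstream_dependencies": ["numpy", "pandas"],   # open-source assets in the lifecycle
    "owners": ["risk-ml-team@example.com"],         # employees, contractors, etc.
}

print(json.dumps(ml_bom_entry, indent=2))
```

An inventory built from entries like this is what would let a business or agency answer the question posed to the committee: which models exist, where they are in the life cycle, what data they process, and who is involved.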

Building Secure AI by Design with Protect AI

More AI regulation is imminent, and organizations must start preparing now to ensure that they can stay compliant and manage AI risk appropriately. We founded this company with a mission to help customers build a safer AI-powered world by providing solutions to secure ML systems and AI applications. From our inception, we've actively collaborated with customers, experts, and researchers to pinpoint potential threats within the ML software supply chain. Since then, we have been committed, through our products and programs, to building purpose-built solutions that help organizations see, know, and manage their AI risk.

First is Radar, the industry's most advanced offering to secure and manage AI risk. With integrated security checks, it provides visibility and auditability into the ML attack surface by creating an ML-BOM. Radar also includes an advanced policy engine to enable efficient risk management across regulatory, technical, operational, and reputational domains.

We are also actively developing, maintaining, and releasing first-of-their-kind OSS tools built for AI/ML security, designed to raise awareness and provide practical, accessible solutions for managing AI system risk. These tools include NB Defense for Jupyter notebook security, ModelScan for scanning model artifacts, and Rebuff for defending against LLM prompt injection attacks.
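As an illustration of the kind of check a model-artifact scanner performs (a simplified sketch, not ModelScan's actual implementation), the snippet below inspects a pickled model file for opcodes that can import and call arbitrary Python objects, which is how malicious model artifacts achieve code execution when loaded.

```python
import pickletools

# Opcodes that allow a pickle to import and invoke arbitrary Python callables.
# Their presence does not prove an artifact is malicious, but it flags the
# artifact for closer review before it is loaded.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ"}

def scan_pickle(path: str) -> list[str]:
    """Return descriptions of suspicious opcodes found in a pickled artifact."""
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, _pos in pickletools.genops(f):
            if opcode.name in SUSPICIOUS_OPCODES:
                findings.append(f"{opcode.name}: {arg!r}")
    return findings

if __name__ == "__main__":
    # "model.pkl" is a hypothetical artifact path used only for illustration.
    for finding in scan_pickle("model.pkl"):
        print("potentially unsafe opcode:", finding)
```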

Finally, there is huntr, the world's first AI/ML bug bounty program. huntr provides a single place for security researchers to submit vulnerabilities found in open-source AI tools and frameworks. This strengthens the security and stability of AI/ML applications and provides the world with critical intelligence on AI vulnerabilities and how to fix them.

In Closing

Until now, in order to stay competitive and drive innovation through AI, organizations have concentrated primarily on speed and accuracy in deploying ML models and AI applications, often overlooking the business policies, processes, and procedures needed to ensure those systems are also secure. ML models are invaluable assets within organizational infrastructure and production software, and must be protected in the same way as any other asset. We look forward to continuing to work with the US government to propose ideas and solutions that will help drive government and industry adoption of MLSecOps practices, which can be enhanced by new technical standards and sensible governance requirements. You can watch the full hearing here.