
Protect AI Named Cutting Edge Machine Learning Security Operations (MLSecOps) Solution at RSA Conference 2024


Company recognized by Cyber Defense Magazine in Global InfoSec Awards for protecting AI/ML systems from security risks 

SAN FRANCISCO, RSA CONFERENCE, May 7, 2024 -- Protect AI, the leading artificial intelligence (AI) and machine learning (ML) security company, today announced that it has been named the Cutting Edge Machine Learning Security Operations (MLSecOps) Solution in the Cyber Defense Magazine (CDM) 2024 Global InfoSec Awards. Winners were announced at the conference, and the company will demonstrate the Protect AI platform at Booth 1655 in the Moscone South Expo.

“Being named Cutting Edge MLSecOps Solution at the RSA Conference by the Global InfoSec Awards further reinforces Protect AI’s status as the premier platform in AI/ML security,” said Ian Swanson, CEO of Protect AI. “Protect AI offers unmatched capabilities, enabling enterprises to build, deploy, and manage AI applications that are not only secure and compliant but also operationally efficient.”

“After an extensive global search for cybersecurity innovators who can make a significant impact and help secure AI/ML systems, Cyber Defense Magazine has recognized Protect AI as a top choice for deployment in any environment,” said Yan Ross, Cyber Defense Magazine’s Global Editor. “Protect AI stands out for enabling organizations to see, know and manage security risks so they can build and manage secure-by-design ML systems and AI applications, without adding time to market, cost or complexities.”

Cyber Defense Magazine has honored InfoSec innovators from across the globe for the past twelve years. Winners were selected by certified security professionals holding CISSP, FMDHS, and CEH certifications.

Protect AI’s end-to-end AI/ML security platform includes:

Radar is a comprehensive AI security posture management solution that gives organizations end-to-end visibility across the entire ML supply chain, including models, data, AI applications, and ML pipelines, so customers can quickly identify and mitigate risks. Protect AI’s vendor-neutral approach ensures compatibility with all ML vendors and tools, easing deployment in diverse environments. An AI/ML bill of materials (AI/ML-BOM) and a robust policy engine support audits of ML systems and enforce security policies, making ML systems transparent and governable.

Guardian acts as a secure model gateway, ensuring the integrity and safety of first- and third-party models by continuously scanning them for malicious code and other policy violations before they enter or are used in a customer's environment. This preemptive measure guards against vulnerabilities introduced through public repositories such as Hugging Face, GitHub, and TensorFlow Hub, as well as private model registries. By running security scans as part of the CI/CD process, Guardian ensures that only secure models are deployed in a customer's environment.

Sightline is the industry’s first AI/ML vulnerability database and threat feed. Drawing on Protect AI’s threat research community, huntr, Sightline provides unique insights into AI/ML vulnerabilities, exploits, and remediations, along with red-teaming scripts. It delivers early alerts and context an average of 30 days or more before vulnerabilities are published in the National Vulnerability Database, creating a shared knowledge base for preventing and mitigating AI/ML-specific threats.

LLM Guard is a large language model (LLM) and generative AI security tool that acts as a firewall, scanning both inputs and outputs to block threats in ChatGPT, Anthropic's Claude, Gemini, and other LLMs.

huntr is the world’s first AI/ML bug bounty platform, focused on protecting AI/ML open-source software (OSS), foundational models, and ML systems. Protect AI’s research team and the huntr community continually find vulnerabilities in the tools used to build AI applications and report monthly on critical vulnerabilities and their remediation.

The MLSecOps Community has, since its inception, been the premier hub for AI security educational resources and knowledge sharing.

About Protect AI

Protect AI is the broadest and most comprehensive platform for securing AI. It enables organizations to see, know, and manage security risks, defend against unique AI security threats, and embrace MLSecOps for a safer AI-powered world. The Protect AI platform provides visibility into the AI/ML attack surface, detects unique security threats, and remediates vulnerabilities. Founded by AI leaders from Amazon and Oracle, Protect AI is funded by Acrew Capital, boldstart ventures, Evolution Equity Partners, Knollwood Capital, Pelion Ventures, and Salesforce Ventures. The company is headquartered in Seattle, Washington.


For more information, visit us on the web, and follow us on LinkedIn and X/Twitter.

Media Contact:

Marc Gendron

Marc Gendron PR for Protect AI


# # #