
The Time is Now to Protect AI

This has been quite the year for artificial intelligence (AI). Innovations in Generative AI are driving lightning-fast adoption across industries. Goldman Sachs has forecast that AI could increase global output by 7%, or roughly $7 trillion, over a decade. The EU has approved the AI Act, and the White House is drafting the AI Bill of Rights. AI is a critical driver of digital transformation, and we must protect it commensurate with the value it creates. As such, we founded this company with a mission to help customers build a safer AI-powered world by providing solutions to secure ML systems and AI applications.

That mission is dramatically accelerated today. Protect AI is excited to share that we closed our Series A round of $35M led by Evolution Equity Partners with participation from Salesforce Ventures, Acrew Capital, boldstart ventures, Knollwood Capital and Pelion Ventures. We started Protect AI 18 months ago, drawing on our experience with some of the largest machine learning and AI deployments in the world during our time at Amazon, Oracle, and DataScience.com. We saw the value that AI can and does deliver, but also the new risks inherent in these systems. This new capital speeds our mission and fortifies our position to lead this new market segment in the security of AI and ML, which we call MLSecOps.

The past year has been one of innovation and growth for Protect AI. We have built an amazingly talented team of over 25 artificial intelligence and security experts, including our CISO Diana Kelley. We released, and will continue to maintain, multiple industry-leading open-source initiatives such as NB Defense for Jupyter notebook security and the prompt injection detector Rebuff. To move the market alongside fellow travelers and to educate AI/ML developers, we launched our MLSecOps educational community and podcast, which features some of the brightest minds in AI security, governance, and policy. We are achieving product-market fit for our platform for securing ML systems and AI applications, called AI Radar, with early customers in financial services, healthcare, life sciences, AI foundational model suppliers, and MLOps platform providers.

AI Radar is Protect AI's flagship product, and is the industry's first MLSecOps platform. Our platform addresses a critical challenge faced by even the most advanced companies – the lack of a comprehensive inventory of the assets and elements in their ML systems. With the rapid increase in supply chain assets like foundational models and third-party training data sets, coupled with the dynamic nature of ML systems and data workflows, traditional security tools fall short of providing the needed visibility. This creates significant AI security challenges, putting organizations at risk in areas such as regulatory compliance, PII leakage, data manipulation, model poisoning, infrastructure protection, and reputational damage.

AI Radar enables organizations to deploy AI more safely by assessing the security of their ML supply chain and swiftly identifying and mitigating risks. It achieves this through real-time visibility and insights into an ML system's attack surface, generating and updating a tamper-proof ML Bill of Materials, and implementing security checks and remediations throughout the ML development lifecycle – from experimentation to inference. To see AI Radar in action and learn more, schedule a demo for you and your teams at your convenience.
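To make the ML Bill of Materials idea concrete, here is a minimal illustrative sketch in Python. This is a hypothetical example of the general concept only – it is not AI Radar's actual schema, format, or implementation. It records a model's components (datasets, dependencies) and attaches a digest over the canonical record, so any later change to the inventory is detectable:

```python
import hashlib
import json

def make_mlbom_entry(model_name, model_version, datasets, dependencies):
    """Build a minimal, illustrative ML Bill of Materials entry.

    Hypothetical sketch of the concept; not AI Radar's actual schema.
    """
    entry = {
        "model": {"name": model_name, "version": model_version},
        # Third-party training data sets used to build the model
        "datasets": sorted(datasets),
        # e.g. framework and library versions in the ML supply chain
        "dependencies": dict(sorted(dependencies.items())),
    }
    # A digest over the canonical JSON makes the record tamper-evident:
    # any change to the inventory changes the digest.
    canonical = json.dumps(entry, sort_keys=True, separators=(",", ":"))
    entry["digest"] = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return entry

entry = make_mlbom_entry(
    "fraud-classifier",
    "1.2.0",
    datasets=["internal-transactions-2023"],
    dependencies={"scikit-learn": "1.3.0"},
)
```

A real MLSecOps platform would go much further (signed records, full lifecycle tracking, automated remediation), but even this toy version shows why an inventory plus integrity check matters: without it, a swapped dataset or dependency in the supply chain can go unnoticed.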

With the Series A funding, Protect AI will scale our R&D, go-to-market activities, and customer success initiatives. Our R&D investments will focus on three key areas: adding capabilities to AI Radar, expanding our research to find and report more critical vulnerabilities and exploits in the ML supply chain, and investing in our open-source projects. We are hiring across multiple roles, so come join the team!

I want to give a huge thanks to the entire Protect AI team, investors, and our customers! Together we are building a safer AI-powered world.