
boldstart welcomes Protect AI into its category-creating portfolio

Ed Sim, Protect AI Board Member and boldstart ventures' Founder & General Partner, announces the investment in Protect AI to shift security left to the design phase of the ML lifecycle.


Read Ed's blog here: Protect AI — MLSecOps, shift left security for AI/ML, welcome to boldstart

For the last decade, boldstart has been a day one partner with technical founders creating new cybersecurity categories. This includes now-established companies like Security Scorecard (security ratings), BigID (data privacy/security), and Snyk (developer-first security), along with many newer ones like Slim, Jit, and Reco. We’re thrilled to have Protect AI join this group as it creates a new category, MLSecOps, to shift security left to the design phase and help developers find and fix vulnerabilities across the entire ML lifecycle. We’re also excited to announce our day one partnership, co-leading Protect AI’s initial $13.5M round with our friends at Acrew Capital, joined by Knollwood Capital, Pelion Ventures, Avisio Ventures, and experienced cybersecurity leaders Shlomo Kramer, Nir Polak, and Dimitri Sirota.

We first met Ian Swanson years ago when he was founder and CEO of DataScience.com, a platform for end-to-end ML development and operations, which was later acquired by Oracle. After Oracle, Ian went to AWS to lead go-to-market functions for the AI/ML business, where his team helped thousands of organizations scale their AI operations. Through many customer deployments of MLOps platforms, Ian noticed that as the code repos and data sources for ML skyrocketed, so did the number of security vulnerabilities. And when it came to AI/ML, security was overlaid after the fact, if at all, and required coordination among data scientists, developers, and security professionals. A new approach was clearly needed, one where security would be embedded into the design and build phase, similar to the DevSecOps movement. Together with former colleagues Badar Ahmed (head of engineering, DataScience.com) and D Dehghanpisheh (global leader of AI/ML Solution Architects, AWS), Ian founded Protect AI to set a standard for MLSecOps.

Protect AI’s mission is to build a safer AI-powered world by empowering a community of ML security researchers, finding unique exploits, and providing tools that reduce the risk inherent in MLOps pipelines. MLSecOps puts security at the heart of ML design, enabling ML developers to keep innovating rapidly while protecting against ML-specific vulnerabilities.

Protect AI’s initial product is NB Defense, the first security solution to find and fix vulnerabilities in Jupyter Notebooks, a core component used at the beginning of the machine learning supply chain. ML practitioners use notebooks to create and share documents that contain live code, equations, visualizations, data, and text. There are now over 10M publicly accessible notebooks, growing by 2M+ annually, with many more in private repositories, and the security risks are growing with them. These risks include notebooks that are incompatible with existing static code analyzers, arbitrary code execution in serialized models, poisoned training data, and model evasion using adversarial machine learning techniques. NB Defense is free to use; it quickly scans a single notebook or a repository of notebooks for common security issues, identifies problems, and guides timely remediation.
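To make notebook scanning concrete, here is a minimal, hypothetical sketch of the kind of check such a tool might run: parsing a notebook's JSON, walking its code cells, and flagging likely hardcoded secrets. This is not NB Defense's implementation; the file paths and detection patterns below are illustrative assumptions only.

```python
# Illustrative sketch of a simple notebook scanner (not NB Defense itself).
# A .ipynb file is a JSON document, so we can parse it, walk the code cells,
# and flag cells whose source matches common secret patterns.
import json
import re
from pathlib import Path

# Intentionally small, hypothetical patterns for common credential formats.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "hardcoded key/token/password": re.compile(
        r"(?i)(api[_-]?key|token|password)\s*=\s*['\"][^'\"]{8,}['\"]"
    ),
}


def scan_notebook(path: Path) -> list[str]:
    """Return human-readable findings for a single notebook."""
    findings = []
    nb = json.loads(path.read_text(encoding="utf-8"))
    for index, cell in enumerate(nb.get("cells", [])):
        if cell.get("cell_type") != "code":
            continue
        source = "".join(cell.get("source", []))
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(source):
                findings.append(f"{path}: cell {index}: possible {label}")
    return findings


def scan_repository(root: str) -> list[str]:
    """Scan every notebook under a directory, mirroring a repo-wide scan."""
    findings = []
    for nb_path in Path(root).rglob("*.ipynb"):
        findings.extend(scan_notebook(nb_path))
    return findings


if __name__ == "__main__":
    for finding in scan_repository("."):
        print(finding)
```

A production scanner layers on much more, such as dependency and license checks, detection of unsafe model deserialization, and remediation guidance, but the core workflow of parsing notebooks and surfacing findings per cell follows this shape.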

As a day one investor in Snyk, which helped create the developer-first security experience, we believe the time is now for the same shift left ⬅️ experience for ML. As Snyk proved, there is no better way to build more secure applications than by building security into the design phase, and we’re excited to partner with Ian, Badar, D, and the entire Protect AI team to help build a safer AI-powered world.

Welcome to the boldstart family!