
Why We Are Building Protect AI

Dec 15, 2022 Ian Swanson 2 min read


I’ve spent much of my career scaling Artificial Intelligence and Machine Learning (AI/ML) while leading some of the largest AI teams in the technology industry. As the founder of DataScience.com, I wanted to make it easier for enterprises to practice and scale AI/ML use cases in production. Our success in that endeavor led Oracle to buy the company and acquire the platform, giving it a best-of-breed solution in the then-emerging field of Machine Learning Operations (MLOps). In fact, DataScience.com capabilities are still available.

Afterward, I led go-to-market functions for the AI/ML business across the entire Amazon Web Services portfolio, with teams reaching thousands of customers a month who were deploying ML models and AI applications across their enterprises. Helping those customers scale their ML systems gave me a clear sense of what was needed to accelerate the adoption of AI in businesses of all types and sizes.

It’s no surprise that AI sits at the core of many business operations and capabilities today, and its role will only continue to grow. Much of this acceleration can be credited to the adoption of MLOps: the practice of streamlining the path from ML model to production by providing the processes, governance, and procedures for maintaining and monitoring AI/ML systems at scale.

But what my teams and I observed was a critical missing capability: comprehensive security practices that could handle today’s cyber threats while addressing the unique needs of ML code. It was apparent that ML required a different approach to security, and that MLOps needed an evolution similar to the one DevOps had already undergone.

The evolution from DevOps to DevSecOps involves incorporating security into the workflows of tools and processes, instead of treating security as an afterthought. It creates a sense of shared responsibility for maintaining highly secure code. My team and I believe MLOps needs the same evolution: moving from MLOps to MLSecOps.

Our hypothesis for this transition is simple. We don’t believe that MLOps sufficiently safeguards against the security risks that exist in ML code. As MLOps scales AI/ML systems in capabilities, code repos, and data sources, opportunities for security incidents increase and new vulnerabilities appear in an enterprise’s ML supply chain. Because ML is complex, and the pipelines that help organizations deliver it at scale inherit that complexity, blind spots emerge that traditional cybersecurity offerings cannot see, leaving those risks inadequately understood and unmitigated. Today, the largest cybersecurity enterprise software companies in the world do not offer security solutions for AI/ML systems.

Enter, Protect AI.

Our vision is to build a safer AI-powered world. Our mission is to set the standard for MLSecOps with a community and the unique ML security tools needed to give customers the confidence to quickly innovate on enterprise AI applications and defend against ML zero-day attacks. We are starting that journey today with NB Defense, a free offering for anyone practicing data science or ML engineering that helps practitioners run security scans on Jupyter Notebooks. In the coming months, we will add security capabilities across the entire ML lifecycle. We hope you will join us as we shape the evolution to MLSecOps, helping every AI system be more valuable than ever while being as safe as possible.

Sincerely, 

Ian
