
Protect AI Accelerates MLSecOps

Protect AI introduces security earlier in the ML model development lifecycle with its free product for Jupyter Notebooks, NB Defense.

Read more here: Cloud Computing Magazine | Protect AI Accelerates MLSecOps, by Greg Tavarez

Machine learning practitioners are familiar with Jupyter Notebooks, which are used to create and share documents that contain live code, equations, visualizations, data and text. Working in notebooks is a common first step in the machine learning supply chain.

Cybersecurity company Protect AI found, within public Jupyter Notebooks, examples of exposed secrets, leaked personally identifiable information and critical vulnerabilities that an attacker could exploit. Surely existing cybersecurity solutions step in here, right? Not so: current cybersecurity solutions do not provide coverage of Jupyter Notebooks.

This coverage gap likely hides vulnerabilities, creating zero-day exploit risks. So, as more organizations move ML into production environments, a solution is needed to address vulnerabilities in Jupyter Notebooks. Protect AI is providing that solution, backed by $13.5 million in seed funding, with its first product, NB Defense.

“ML developers and security teams need new tools, processes and methods that secure their AI systems,” said Ian Swanson, co-founder and CEO, Protect AI. “We are launching a free product that helps usher in this new category of MLSecOps to build a safer AI-powered world, starting now.”

To expand on Swanson's point: MLSecOps (machine learning + security + operations) is an emerging practice in application security that introduces security earlier in the ML model development lifecycle.

NB Defense works across MLOps tools, meeting enterprises where they do ML today. It creates a translation layer that applies traditional security capabilities to Jupyter Notebooks, then communicates findings back natively in the notebook or through reports with context-specific links to the problematic areas within the notebook for remediation.

NB Defense security scans of a notebook check for:

  • Common vulnerabilities and exposures in ML open-source frameworks, libraries and packages
  • Authentication tokens and other credentials across a host of services and products (see the sketch after this list)
  • Non-permissive licenses in ML open-source frameworks, libraries and packages
  • Sensitive data and personally identifiable information
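
To make the credentials check concrete, here is a minimal sketch of how a notebook scanner can treat a .ipynb file as JSON and flag credential-looking strings in code cells. The patterns, function name and file handling below are illustrative assumptions, not NB Defense's actual detectors.

```python
import json
import re
import sys

# Illustrative only: a toy secrets check over a Jupyter Notebook.
# A .ipynb file is JSON, so each code cell's source can be inspected directly.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "hard-coded API key or token": re.compile(
        r"(?i)(api[_-]?key|token|secret)\s*=\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan_notebook(path: str) -> list[tuple[int, str]]:
    """Return (cell_index, finding) pairs for suspicious strings in code cells."""
    with open(path, encoding="utf-8") as f:
        notebook = json.load(f)
    findings = []
    for idx, cell in enumerate(notebook.get("cells", [])):
        if cell.get("cell_type") != "code":
            continue
        source = "".join(cell.get("source", []))
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(source):
                findings.append((idx, name))
    return findings

if __name__ == "__main__":
    for cell_idx, finding in scan_notebook(sys.argv[1]):
        print(f"cell {cell_idx}: possible {finding}")
```

A real scanner layers many more detectors on top of this basic pattern, such as CVE lookups against dependencies, license checks and PII detection, which is what the list above describes.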

“It was vital that we built NB Defense to work with all of these platforms, meeting their data scientists where they work, empowering them to improve the security posture of their workloads without curbing their productivity or creativity,” said Chris King, Head of Product, Protect AI. “Securing a notebook is just the first step.”

NB Defense is available under a free license. Users can install it and work through either the JupyterLab Extension or the Command Line Interface. The product is also designed to be embedded in ML development workflows, with pre-commit hook support that lets a user run a scan before any changes enter a repository.
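
As a rough sketch of how that pre-commit embedding could look, the script below shells out to the scanner and blocks the commit when the scan reports issues. The `nbdefense` package name, the `nbdefense scan` command and the non-zero exit code on findings are assumptions for illustration; consult the product's documentation for the exact invocation.

```python
import subprocess
import sys

def scan_before_commit(path: str = ".") -> int:
    """Run a notebook scan over `path` and block the commit if it fails."""
    # Assumes NB Defense was installed (e.g. via `pip install nbdefense`)
    # and exposes an `nbdefense scan` command; both are assumptions here.
    result = subprocess.run(["nbdefense", "scan", path])
    if result.returncode != 0:
        print("Scan reported issues; resolve them before committing.", file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(scan_before_commit())
```

Wired into a pre-commit hook, a script like this keeps leaked credentials or vulnerable dependencies from reaching the repository in the first place, which is the workflow the pre-commit hook support is designed to enable.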