Stay secure without disrupting AI innovation
Foundation models hosted on public repositories can put an enterprise at risk. Models containing malicious code can be downloaded and deployed into your environment with no policy or controls in place to scan them for security exploits.
Guardian scans models from public repositories for malicious code before they are delivered. This adds a critical layer of security prior to using or fine-tuning ML models, so you can continue AI exploration and innovation with confidence.
Enforce Model Security
AppSec teams can protect against exploits and attacks by enforcing the use of secure models within the organization.
Detect Malicious Code
Detect malicious code in ML models early in development, going beyond traditional malware scanning tools.
Seamlessly Integrate
Integrates easily with existing MLOps workflows to keep pace with AI-driven innovation.
Protect your AI from ML models containing malicious code
Enterprise-grade security gateway
Guardian acts as a secure gateway, bridging ML deployment and AppSec processes, ensuring that only secure, policy-compliant models are used in your network.
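In practice, a gateway of this kind sits between model consumers and public repositories, so client tooling is pointed at the gateway instead of the upstream source. The sketch below is illustrative only: `HF_ENDPOINT` and pip's `index-url` are real client settings, but the gateway URL is a hypothetical placeholder, not a Guardian endpoint.

```shell
# Route Hugging Face Hub downloads through an internal gateway
# (URL is a hypothetical example, not a real Guardian endpoint).
export HF_ENDPOINT="https://models-gateway.internal.example.com"

# Route Python package installs through the same internal mirror.
pip config set global.index-url "https://models-gateway.internal.example.com/pypi/simple"
```

With settings like these, model and package requests from developers flow through the gateway, where security checks can run before anything is delivered.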
Innovative model scanning
Guardian has an innovative scanner that goes beyond traditional malware scanning tools. It is purpose-built to scan popular ML storage formats for malicious code.
Aggregated Insights
Guardian includes executive-level dashboards that aggregate information on which open-source models are in use, whether they are secure, who created them, how they are licensed, and more.
Data Integration
Seamlessly integrates with existing security processes and with Protect AI's end-to-end visibility product, Radar, to understand all AI/ML threat surfaces in your organization.
Key Features
- Reverse Proxy: Guardian's endpoint captures model requests created internally by MLOps teams and runs requested models through security checks before delivery.
- Automated Scanning: Guardian introspects ML models in memory without executing them, ensuring they are scanned for malicious code and exploits as they are requested during ML development.
- Policy Engine: Enforce OSS license permissibility checks, and create fine-grained security and compliance policies for model acceptance or rejection.
- Supports Leading Model Formats: Guardian supports PyTorch, TensorFlow, XGBoost, Keras, and other popular frameworks.