Stay Secure Without Disrupting AI Innovation
Using ML models without security checks puts your enterprise at risk. Models containing malicious code can be downloaded and deployed into your environment with no policies or controls in place to scan for hidden risks, leaving openings for internal and external threat actors to exploit.
Guardian continuously scans both first- and third-party models for security threats before they are used. This adds a critical layer of security to ML model development and deployment, so you can continue AI exploration and innovation with confidence.
Enforce Model Security
AppSec teams can protect against threats with enterprise-wide policies that enforce the use of secure models across the organization and improve security posture.
Ease of Integration
Guardian integrates easily with existing MLOps workflows, keeping pace with AI-driven innovation.
Comprehensive Model Scanning
Guardian goes beyond traditional malware scanning tools and detects malicious code in ML models early in development and deployment.
Secure Your ML Models From Unique Threats
Enterprise-Grade Security Capabilities
Using secure gateways and configurable policies, Guardian ensures that only secure and compliant models are used in enterprise networks.
Innovative Model Scanning
Guardian’s proprietary vulnerability scanners detect malicious code in new and existing AI/ML artifacts saved in popular ML storage formats.
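To illustrate the class of threat these scanners target, the sketch below shows how a model saved in Python's pickle format (used by many ML model files) can execute arbitrary code the moment it is loaded. This is a generic demonstration of a serialization attack, not Guardian's code.

```python
import os
import pickle

# Illustrative serialization attack: pickle lets an object dictate how it is
# reconstructed, so a poisoned "model" can run arbitrary code on load.
class MaliciousPayload:
    def __reduce__(self):
        # Any callable can be smuggled in here; a real attack might exfiltrate
        # credentials or open a reverse shell instead of this harmless echo.
        return (os.system, ("echo code ran during model load",))

payload = pickle.dumps(MaliciousPayload())

# Merely deserializing the artifact executes the attacker's code; no call to
# predict() or any other model method is required.
pickle.loads(payload)
```

Scanning for patterns like this before a model reaches a developer's machine is what distinguishes model-aware scanning from traditional malware tools.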
Aggregated Insights
Guardian includes executive-level dashboards that aggregate information on which open source models are in use, whether they are secure, who created them, how they are licensed, and more.
Data Integration
Guardian integrates easily with Protect AI’s end-to-end visibility product, Radar, to give you a view of all AI/ML threat surfaces in your organization.
Key Features
- Secure Gateway: Secure your ML model supply chain with Guardian’s endpoint, which captures and checks the MLOps team’s model requests before delivery (see the gateway sketch after this list).
- Supports Leading Model Formats: Guardian supports PyTorch, TensorFlow, XGBoost, Keras, and other popular frameworks.
- Policy Engine: Guardian comes with out-of-the-box policies for model acceptance or rejection, enforcing checks on model provenance, malicious code, safe storage formats, OSS licensing, and more (see the policy sketch after this list).
- Automated Scanning: Embed Guardian's Scanner API in your development and deployment pipelines to scan models before they are distributed, ensuring they are safe to use (see the CI sketch after this list).
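A minimal sketch of the gateway idea: route model downloads through a checking proxy so requests are captured before delivery. The gateway hostname below is a placeholder, not a documented Guardian endpoint; HF_ENDPOINT itself is a standard huggingface_hub override and must be set before the import.

```python
import os

# Hypothetical setup: point the Hugging Face client at a gateway URL so model
# requests are captured and checked before delivery. The hostname is a
# placeholder, not a documented Guardian endpoint.
os.environ["HF_ENDPOINT"] = "https://guardian-gateway.example.com"

from huggingface_hub import hf_hub_download

# The download now flows through the gateway, which can reject the model if
# it fails the organization's security policies. repo_id is illustrative.
model_path = hf_hub_download(repo_id="org/model", filename="model.safetensors")
print(model_path)
```

The benefit of a gateway over ad hoc scanning is that enforcement happens at the point of acquisition, so unvetted artifacts never land on a developer's machine.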
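For the policy sketch, here is an illustrative acceptance check covering the dimensions named above (provenance, malicious code, storage format, licensing). The schema and field names are assumptions, not Guardian's actual policy format.

```python
# Hypothetical policy definition; the keys and values are illustrative only.
model_policy = {
    "allowed_sources": ["huggingface.co/approved-org"],  # model provenance
    "allowed_formats": ["safetensors"],                  # safe storage formats
    "allowed_licenses": ["apache-2.0", "mit"],           # OSS licensing checks
}

def evaluate(model_metadata: dict) -> bool:
    """Accept or reject a model against the policy (illustrative only)."""
    return (
        model_metadata["source"] in model_policy["allowed_sources"]
        and model_metadata["format"] in model_policy["allowed_formats"]
        and model_metadata["license"] in model_policy["allowed_licenses"]
        and not model_metadata["malicious_code_found"]
    )

candidate = {
    "source": "huggingface.co/approved-org",
    "format": "safetensors",
    "license": "apache-2.0",
    "malicious_code_found": False,
}
print(evaluate(candidate))  # True: the model satisfies every check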
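Finally, the CI sketch: a script that submits a model artifact for scanning and fails the pipeline if it does not pass. The endpoint URL, request fields, and response shape below are assumptions, not Guardian's documented Scanner API.

```python
import sys
import requests

# Hypothetical scan endpoint; replace with your deployment's real URL.
GUARDIAN_SCAN_URL = "https://guardian.example.com/api/v1/scan"

def scan_model(artifact_path: str) -> bool:
    """Submit a model artifact for scanning; return True if it passes policy."""
    with open(artifact_path, "rb") as f:
        resp = requests.post(GUARDIAN_SCAN_URL, files={"model": f}, timeout=300)
    resp.raise_for_status()
    # "status" is an assumed response field, shown for illustration.
    return resp.json().get("status") == "pass"

if __name__ == "__main__":
    if not scan_model(sys.argv[1]):
        sys.exit("Model failed security policy checks; blocking deployment.")
```

Wiring a check like this into CI means a failing scan stops the build, so unsafe models never reach distribution.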
Guardian vs PAI Open Source Comparison

| Feature | Guardian | PAI Open Source |
| --- | --- | --- |
| Detects serialization attacks in model formats | 17 | 9 |
| Detects serialization embedded in model layers | | |
| Detects model architectural backdoors in file formats without serialization risks | | |
| Blocks access to models that don’t comply with policies | | |
| SDK support for easy integration in CI/CD or model pipelines | | |
| Policy management for all models | | |
| Integration into model registries | | |