Protect AI Blog

Model Security
April 23, 2025

Introducing Guardian Local Scanning: Streamlined Model Security

At Protect AI, we're committed to empowering you to secure AI models seamlessly within your existing workflows. With that goal in mind, we’re pleased...
4 minute read
Read more
Model Security
April 23, 2025

Implementing Advanced Model Security for Custom Model Import in Amazon Bedrock

Integrating generative AI into enterprise workflows unlocks tremendous innovation...
30 minute read
Read more
Red Teaming
April 23, 2025

Building Robust LLM Guardrails for DeepSeek-R1 in Amazon Bedrock

35 minute read
Read more
Secure by Design
April 22, 2025

Secure by Design for AI: A Real-World Healthcare Case Study

Fifth and final in a five-part series on implementing Secure by Design principles in AI system...
9 minute read
Read more
Secure by Design
April 16, 2025

Tools and Technologies for Secure by Design AI Systems

This is the fourth in a five-part series on implementing Secure by Design principles in AI...
10 minute read
Read more
Machine Learning
April 16, 2025

Machine Learning Models: A New Attack Vector for an Old Exploit

Machine learning (ML) has seen rapid adoption across industries, enabling advancements in...
6 minute read
Read more
Model Security
April 14, 2025

4M Models Scanned: Hugging Face + Protect AI Partnership Update

9 minute read
Read more
Cybersecurity
April 11, 2025

Security Spotlight: Embracing a Culture of Security at Protect AI

Young companies, growing fast, often consider convenience over security. Prioritizing speed...
6 minute read
Read more
LLM Security
April 8, 2025

MCP Security 101: A New Protocol for Agentic AI

In November 2024, Anthropic introduced Model Context Protocol (MCP), an open source...
9 minute read
Read more
Secure by Design
April 3, 2025

Securing Agentic AI: Where MLSecOps Meets DevSecOps

This is the third in a five-part series on implementing Secure by Design principles in AI...
12 minute read
Read more
Red Teaming
April 2, 2025

Qwen2.5-Max Vulnerability Assessment

Qwen2.5-Max beats DeepSeek-V3 on security. Assessment brief: The model used for this assessment...
15 minute read
Read more
Artificial Intelligence
March 27, 2025

The Expanding Role of Red Teaming in Defending AI Systems

This article was originally published by Ruchir Patwa for TechNewsWorld.
5 minute read
Read more
Adversarial ML
March 27, 2025

A CISO’s Guide to Securing AI Models

This article was originally published by Diana Kelley for Help Net Security.
5 minute read
Read more
LLM Security
March 27, 2025

A Step-by-Step Guide to Securing LLM Applications

This article was originally published by Neal Swaelens for the RSA Conference website.
6 minute read
Read more
Secure by Design
March 26, 2025

Building Secure by Design AI Systems: A Defense in Depth

Second in a five-part series on implementing Secure by Design principles in AI system...
9 minute read
Read more
Secure by Design
March 26, 2025

The Evolution of AI Security: Why Secure by Design Matters

First in a five-part series on implementing Secure by Design principles in AI system...
6 minute read
Read more
Red Teaming
February 12, 2025

Automated Red Teaming Scans of Databricks Mosaic AI Model Serving Endpoints Using Protect AI Recon

We are thrilled to announce the integration of Protect AI’s Recon with Databricks Mosaic AI...
10 minute read
Read more
LLM Security
February 10, 2025

Breaking Down LLM Security: 3 Key Risks

Last week, Ruchir Patwa and Neal Swaelens sat down to discuss OWASP’s Top 10 for LLMs. (Didn’t...
6 minute read
Read more
Secure by Design
February 7, 2025

Secure by Design: Why Protect AI Signed CISA's Pledge

I've spent decades watching the cybersecurity landscape evolve, from the early days of viruses...
7 minute read
Read more
Model Security
January 28, 2025

Using Protect AI's Products to Analyze DeepSeek-R1

DeepSeek-AI has released an MIT-licensed reasoning model known as DeepSeek-R1, which performs...
9 minute read
Read more
LLM Security
January 28, 2025

Why eBPF is Secure: A Look at the Future Technology in LLM Security

When discussing kernel-level technologies, many recall incidents like the CrowdStrike Falcon...
6 minute read
Read more
MLSecOps
January 8, 2025

MLSecOps: The Foundation of AI/ML Security

Written by Diana Kelley for SC World
4 minute read
Read more
MLSecOps
December 11, 2024

How To Secure AI With MLSecOps

Written by Diana Kelley for TechNewsWorld. AI-driven systems have become prime targets for...
5 minute read
Read more
LLM Security
December 6, 2024

Layer’s agentless approach to securing enterprise LLM applications

Large corporations typically rely on cloud providers to serve their large language models,...
3 minute read
Read more
LLM Security
December 4, 2024

How Protect AI is shaping the future of LLM Security at runtime with eBPF

Since 2024, we’ve witnessed enterprises rapidly adopt LLMs and rush to secure...
4 minute read
Read more
Red Teaming
November 25, 2024

Why Automated Red Teaming is Essential for GenAI Security

The Unique Challenges of Securing GenAI Systems. Generative AI (GenAI) has rapidly transformed...
11 minute read
Read more
Model Security
October 25, 2024

Supporting the safe and secure usage of the world's largest AI/ML Model Repository

Protect AI and Hugging Face Partner to Secure the Machine Learning Supply Chain. Today, we’re...
5 minute read
Read more
AI ZeroDay
October 23, 2024

4 Ways to Address Zero-Days in AI/ML Security

Written by Dan McInerney for Dark Reading
5 minute read
Read more
LLM Security
October 8, 2024

Out of Line Threat Scanning for LLMs: Some Real-World Examples

Change is underway at OpenAI. In the wake of a significant leadership shift, the...
6 minute read
Read more
LLM Security
September 27, 2024

RAG Security 101

RAG and Secure Retrieval. Imagine LLMs that are able to adapt instantly to new information,...
9 minute read
Read more
LLM Security
August 28, 2024

Why LLMs Are Just the Tip of the AI Security Iceberg

Written by Diana Kelley for Dark Reading
6 minute read
Read more
LLM Security
July 24, 2024

LLM Security: Going Beyond Firewalls

If 2023 was the opening act for LLMs at the enterprise, then 2024 will be the...
10 minute read
Read more
Red Teaming
July 3, 2024

The Crucial Role of the AI Red Team in Modern Cybersecurity

In the fourth chapter of navigating AI/ML security concerns, let’s explore...
6 minute read
Read more
Threat Intelligence
June 20, 2024

Navigating Vulnerabilities in the AI Supply Chain

In the second chapter of navigating AI/ML security concerns, let’s explore Adam...
6 minute read
Read more
Model Security
June 10, 2024

The Trojan Horses Haunting Your AI Models

In the fast-moving world of Artificial Intelligence (AI) and Machine Learning (ML), ensuring...
4 minute read
Read more
LLM Security
May 30, 2024

AI Agents: Chapter 3 - Practical Approaches to AI Agents Security

In our last article, we discussed how GPTs or AI Agents’ risk could be boiled...
7 minute read
Read more
Industry News
May 24, 2024

The role of cybersecurity in AI system development

Our CISO, Diana Kelley, was interviewed for Silicon Republic
4 minute read
Read more
Industry News
May 23, 2024

Does Your Company Need A Chief AI Officer?

Written by Ian Swanson for Forbes
6 minute read
Read more
LLM Security
April 24, 2024

AI Agents: Chapter 2 - The Thin Line between AI Agents and Rogue Agents

In our previous article, we dissected OpenAI’s and the broader LLM vendor...
10 minute read
Read more
LLM Security
April 24, 2024

NEW to LLM Guard - Next Gen v2 Prompt Injection Model

Since the release of our first prompt injection detection model at the very end...
8 minute read
Read more
LLM Security
April 3, 2024

AI Agents: Chapter 1 - (Ground)breaking LLMs?

Back in November 2023, OpenAI’s dev day was a great leap forward in abstracting...
5 minute read
Read more
LLM Security
March 13, 2024

Hiding in Plain Sight: The Challenge of Prompt Injections in a Multi-Modal World

The announcement of OpenAI’s Sora model, which can create imaginative scenes from...
4 minute read
Read more
LLM Security
March 5, 2024

Preventing LLM Meltdowns with LLM Guard

Recently, we saw a surge in incoherent responses to user prompts in ChatGPT by...
5 minute read
Read more
MLSecOps
March 5, 2024

How MLSecOps Can Reshape AI Security

Written by Ian Swanson for Forbes
8 minute read
Read more
LLM Security
February 21, 2024

Advancing LLM Adoption and Enhancing Security Against Invisible Prompt Injections with LLM Guard

As 2024 unfolds, it's shaping up to be a big year for LLM adoption as well as its...
5 minute read
Read more
Model Security
January 23, 2024

How To Use AI/ML Technology Securely with Open-Source Tools from Protect AI

If you are not already working on a project leveraging some form of AI or ML, the odds are...
12 minute read
Read more
Adversarial ML
January 16, 2024

A CISO’s perspective on how to understand and address AI risk

Written by Diana Kelley for SC Media
6 minute read
Read more
Adversarial ML
January 10, 2024

Adapting Security to Protect AI/ML Systems

Written by Dan McInerney for Dark Reading
7 minute read
Read more
PAI Updates
December 15, 2023

Protect AI Named on the Fortune Cyber60 List

On December 14th, we announced that we were named to the inaugural edition of the Fortune Cyber...
2 minute read
Read more
PAI Updates
December 12, 2023

Protect AI CEO, Ian Swanson, Delivers Testimony In Congressional Hearing on AI Security

Today, I was honored to have the opportunity to deliver testimony at a...
7 minute read
Read more
PAI Updates
August 3, 2023

Announcing ModelScan: Open Source Protection Against Model Serialization Attacks

Machine learning (ML) models are shared over the internet, between teams, and are...
11 minute read
Read more
PAI Updates
July 26, 2023

The Time is Now to Protect AI

This has been quite the year for artificial intelligence (AI). Innovations in Generative AI...
4 minute read
Read more
Industry News
June 15, 2023

Alphabet Spells Out AI Security

What we’re reading: Last week, Alphabet's Google spelled out their view of artificial...
5 minute read
Read more
Threat Intelligence
June 6, 2023

Secure Your Python Projects with Dummies

TL;DR: If you use a private package index for a Python library in development, ensure that it...
7 minute read
Read more
Threat Intelligence
June 5, 2023

Hacking AI: System Takeover in MLflow Strikes Again (And Again)

Two patch bypasses were found for a severe MLflow LFI/RFI vulnerability; all are patched in MLflow version...
13 minute read
Read more
Industry News
May 25, 2023

What’s Old is New - Natural Language as the Hacking Tool of Choice

What we’re reading: We came across Daniel Miessler’s excellent blog and framework, The AI...
5 minute read
Read more
Industry News
May 16, 2023

A Tale of Two LLMs - Safety vs. Complexity

What we’re reading: In this hypersonic era of “I need a bot for that!”, we are fascinated by...
4 minute read
Read more
Industry News
May 8, 2023

Blog Byte: Spherical Steaks in ML. “Say what?!”

What we’re reading: Sven Cattell, the President of AI Village, posted a great article called...
3 minute read
Read more
Employee Spotlight
March 31, 2023

Employee Spotlight: Josh Miles

We’re excited to celebrate Josh Miles, a senior software engineer and an early member of...
3 minute read
Read more
Employee Spotlight
March 30, 2023

Employee Spotlight: Dan McInerney

We’re excited to announce our newest hire, Dan McInerney, who will lead our security team as...
2 minute read
Read more
Employee Spotlight
March 30, 2023

Employee Spotlight: Faisal Khan

Shoutout to Faisal Khan, one of our Senior Software Engineers, and Protect AI's first recruit!...
2 minute read
Read more
MLSecOps
March 13, 2023

Hacking AI: System and Cloud Takeover via MLflow Exploit

CVE-2023-1177: LFI/RFI in MLflow, leading to system and cloud account takeover. All...
23 minute read
Read more
Threat Intelligence
March 7, 2023

AI Zero Day Found in MLflow

AIØD: MLflow Could Expose Entire ML System. TL;DR: MLflow versions prior to 2.2.2 have AI...
9 minute read
Read more
Threat Intelligence
March 6, 2023

Hacking AI: Steal Models from MLflow, No Exploit Needed

No Authentication Can Lead To Problems. MLflow comes with no authentication out of the box...
10 minute read
Read more
PAI Updates
February 27, 2023

NB Defense Now in Public Beta

NB Defense is a powerful Jupyter Lab Extension that enables data scientists to implement Shift...
11 minute read
Read more
PAI Updates
November 16, 2022

Why We Are Building Protect AI

I’ve spent much of my career scaling Artificial Intelligence and Machine Learning (AI/ML)...
3 minute read
Read more
PAI Updates
November 16, 2022

Announcing NB Defense: The Starting Point of ML Security

Prior to joining Protect AI as Head of Product, I worked at AWS as a Solutions...
18 minute read
Read more
MLSecOps
October 21, 2022

AI Zero Days: Why we need MLSecOps, now.

Day 1… Customers have been the focus of my career in almost every role I have had. Prior to...
8 minute read
Read more

Are You Ready to Start Securing Your AI End-to-End?