First in a five-part series on implementing Secure by Design principles in AI system development
The AI Security Landscape Is Rapidly Changing
The age of generative artificial intelligence (GenAI) has arrived, and businesses are adopting the technology at bullet train speed across all sectors. Security teams are running alongside that train, working diligently to ensure corporate and customer data remains protected. Industries like financial services, biotech, and telecom—which have been using predictive machine learning for years—are now realizing that attackers are focusing more attention on what was once a little-exploited area of an organization's attack surface.
As artificial intelligence continues its rapid integration across critical infrastructure, enterprise systems, and consumer applications, the security landscape surrounding these technologies has evolved into a complicated transit complex. Protecting AI systems requires a fundamental shift in security thinking.
Understanding the Unique AI Attack Surface
Unlike traditional cybersecurity focused on networks and data, AI systems face unique vulnerabilities that exploit how these systems learn and operate. To effectively protect AI systems, we need to understand the specific ways attackers target them.
Data Poisoning Attacks
Attackers can secretly insert harmful inputs into training data, compromising AI systems before they're even deployed. Think of a company developing a content moderation AI that unknowingly trains on data containing subtle patterns, later causing the system to allow harmful content through. This resembles contaminating ingredients before a recipe is made rather than tampering with the finished dish.
Prompt Injection Attacks
AI systems are vulnerable to carefully worded inputs that can override safety measures. When your AI assistant visits a website containing hidden instructions, these commands might redirect the AI's behavior without your knowledge—modifying shopping orders, revealing sensitive information, or worse.
Model Deserialization Attacks
When AI models are packaged for storage or sharing, attackers can embed malicious code within them. When loaded by your application, this hidden code activates—like opening what appears to be a legitimate document that secretly installs malware on your computer.
Autonomous AI System Risks
Autonomous, agentic AI systems present additional risks due to their ability to make independent decisions and take actions. Imagine your AI home assistant interacts with a fraudulent website that embeds hidden commands in its responses. These instructions override the AI's safety protocols, causing it to secretly order unauthorized items when performing routine tasks—all without your knowledge or detection until significant damage occurs.
The Critical Need for Secure by Design
The evolving and expanding attack surface of AI systems underscores the critical need for a Secure by Design approach to AI security. Rather than treating security as an afterthought or a series of bolt-on protections, organizations must embed security considerations throughout the entire AI development lifecycle.
This proactive stance ensures that security controls are integrated into the system architecture from inception, addressing vulnerabilities before they can be exploited. However, secure design principles alone are insufficient.
Defense in Depth for AI Systems
A robust defense in depth (DiD) strategy complements the Secure by Design approach by implementing multiple layers of security controls. By establishing overlapping protective mechanisms—from data validation and model monitoring to runtime protection and incident response—organizations can create resilient AI systems capable of withstanding sophisticated attacks even when individual security measures fail.
Applying CISA's Secure by Design Framework to AI
The Cybersecurity and Infrastructure Security Agency (CISA) created its Secure by Design framework which establishes three foundational principles that have proven effective for building security into traditional software. These principles can and should be applied to AI systems development.
1/ Taking Ownership of Customer Security Outcomes
Organizations developing and deploying AI systems should ensure that security is included throughout the entire lifecycle. Security cannot be an afterthought or delegated solely to security teams; it must be a core commitment embraced by leadership and integrated into every phase of AI development.
In practice, taking ownership means establishing clear security requirements from the earliest design phases. The concept of Machine Learning Security Operations (MLSecOps) extends DevSecOps principles to machine learning workflows, addressing AI-specific vulnerabilities. As AI systems become increasingly autonomous, agentic, and complex, this ownership becomes even more important to prevent unwanted outcomes.
2/ Embracing Radical Transparency and Accountability in AI Security
Transparent AI systems allow stakeholders to understand security measures, data management controls, and potential vulnerabilities. Organizations should document and openly communicate how their AI systems are designed, trained, and protected.
This includes maintaining detailed records of training data and model provenance, model architecture decisions, and security control implementations. MLSecOps supports transparency by maintaining detailed documentation of ML model lineage and includes creating ML-specific bills of material (ML or AI-BOMs) that document not just code but also datasets, pretrained models, and frameworks used.
Accountability requires establishing clear metrics to evaluate AI security effectiveness, testing (e.g., red teaming), and regular assessments to identify potential weaknesses.
3/ Leading from the Top for AI Security
Effective MLSecOps requires organizations to lead from the top, embedding security throughout the AI development lifecycle. When executives support and champion security as a core value, it shapes every aspect of AI systems.
Board-level oversight of AI security initiatives is essential, going beyond traditional CISO risk updates. These reports should specifically address AI product security and its impact on customer protection. AI security professionals must have genuine authority to influence product investments and development priorities.
This represents a shift from merely seeking "executive buy-in" to establishing customer security of AI systems as a fundamental business objective that is led from the top through concrete actions and resource allocation.
Conclusion
The security challenges facing AI systems are both novel and complex, requiring a comprehensive approach that combines Secure by Design principles with Defense in Depth strategies. By understanding the unique attack vectors targeting AI and implementing CISA's three foundational principles, organizations can build AI systems that maintain integrity even when individual security measures fail.
In the next installment of this blog series, we'll explore how to implement these principles through specific frameworks and standards designed with AI security in mind.
Check out Part 2 of our series: Building Secure by Design AI Systems: A Defense in Depth Approach
Ready to dive deeper? Get the full white paper: Securing AI’s Front Lines: Implementing Secure by Design Principles in AI System Development