Security Spotlight: AppSec to AI, a Security Engineer's Journey
As an application security engineer with over a decade in the trenches of web applications, APIs, and enterprise systems, I never expected my career path would lead me to the frontier of artificial intelligence security. Yet here I am, finding myself both fascinated and challenged by the unique security considerations of AI systems.
This post is about how I apply a traditional AppSec lens to AI security and think about risks in this new landscape. There are many familiar patterns, but also ever evolving and surprising differences.
The Familiar Territory: Where AppSec Meets AI Security
When I first approached AI security, I was relieved to discover that many fundamental principles from traditional application security directly apply to AI systems. After all, AI applications are still software applications with many of the same underlying components.
In traditional AppSec, input validation has always been a cornerstone of security. We've spent years defending against SQL injection, XSS, and command injection attacks. In AI systems, particularly large language models (LLMs), we face a conceptually similar but technically distinct challenge: prompt injection. Just as a maliciously crafted SQL query can compromise a database, a cleverly designed prompt can manipulate an AI model to bypass guardrails or produce harmful outputs. The core principle (never trust user input) remains the same, but the implementation of defenses requires new approaches specific to AI systems.
Authentication and Authorization
Authentication (verifying who you are) and authorization (determining what you can do) remain fundamental in AI security. However, AI systems introduce new dimensions:
- Model-level access controls for different capabilities
- Fine-grained permissions for training data access
- Authorization boundaries between model features
These concepts map directly to traditional role-based access control and least privilege principles, but with AI-specific implementation details.
Data Protection
In traditional security, we've always been concerned with protecting personally identifiable information (PII) and sensitive data like encrypted passwords. In AI security, this concern extends to training data protection and model weights and parameters.
The stakes are arguably even higher with AI, as leaked training data or model weights can compromise not just individual records but entire systems.
The New Frontier: Unique Challenges in AI Security
Despite the familiar territory, AI security introduces entirely new categories of risks that traditional AppSec frameworks don't adequately address.
Model Attacks
Traditional application security focuses primarily on code-level vulnerabilities, infrastructure weaknesses, and configuration issues. AI security must additionally consider:
- Model poisoning attacks: Adversaries tampering with training data
- Membership inference attacks: Determining if specific data was in the training set
- Model inversion: Extracting training data from model responses
- Adversarial examples: Inputs specifically designed to cause model failures
These attack vectors simply don't exist in traditional applications, requiring security engineers to develop new mental models and defensive strategies such as model scanning and runtime threat detection.
Supply Chain Risks
While traditional AppSec considers software supply chain risks through vulnerable dependencies and third-party libraries, AI systems introduce additional complexities:
- Pre-trained model provenance
- Training data lineage and quality
- Model adaptation and fine-tuning security
Understanding and securing the entire AI supply chain requires a broader perspective than traditional dependency scanning.
Monitoring and Observability
In traditional AppSec, we monitor for known attack patterns, anomalous API calls, and unusual traffic patterns. AI systems require monitoring a constantly growing new set of attack patterns; prompt injection attempts, jailbreaks, PII leakage, code leakage, toxic/harmful content detection to name a few.
These observability requirements extend beyond traditional security monitoring, blurring the lines between security, data science, and MLOps, being termed MLSecOps.
Bridging the Gap: Practices for Securing AI Applications
1. Implement AI-Specific Security Testing
Traditional application security relies heavily on Static Application Security Testing (SAST), Software Composition Analysis (SCA), Dynamic Application Security Testing (DAST), and manual penetration testing for validation aspects of an AppSec program. There are also log monitoring frameworks for incident alerting. For AI applications, the toolset needs to expand to include:
- Model scanning: I like to think of this like the equivalent of SAST+SCA for AI products
- Automated red teaming for AI Apps: This is what I think of like a DAST for AI products
- Constantly evolving runtime threat detection: I think of this as real-time log monitoring and actioning
These testing methodologies complement traditional security testing while addressing AI-specific vulnerabilities and threats.
2. Adopt a Secure AI Development Lifecycle
Just as we have the Secure Software Development Lifecycle (SSDLC) for traditional applications, the Secure AI Development Lifecycle includes:
- Security requirements specific to AI capabilities
- Threat modeling that considers AI-specific attack vectors
- Secure training data management processes
- Model validation and verification before deployment
- Continuous security monitoring of model behavior
This framework extends traditional AppSec processes to encompass the unique aspects of AI development and use.
3. Establish Model Governance and Controls
While traditional AppSec focuses on code governance and access controls, AI security requires additional governance around models:
- Model inventories and risk classification
- Version control for models and training data
- Clear documentation of model limitations and guardrails
- Ethical use policies and enforcement mechanisms
These governance practices help mitigate risks specific to AI systems while complementing traditional security controls.
4. Implement Defense in Depth for AI Systems
The principle of defense in depth remains crucial in AI security. As a security practitioner, I will always believe multiple layers of protection are better:
- Focus on security fundamentals first
- Input sanitization and validation before reaching the model
- Runtime protection against known attack patterns
- Output filtering and validation
- Monitoring for anomalous behavior
- Regular model evaluations and updates
This multi-layered approach combines traditional security practices with AI-specific controls.
The Evolving Role of Security Engineers
As I continue in my role as a Product Security Engineer operating in this new AI world, I've come to appreciate both the continuity and evolution of security principles. Many foundational concepts transfer directly, providing a solid starting point for securing AI applications. However, the unique characteristics of AI systems demand new knowledge, tools, and approaches. You have findings and risks with your AI application that you’re building or you’re using, what now? How can these new risks be remediated or managed to enable these powerful technologies? We’re still defining all of this as we go; it’s a journey and it’s only just getting started.
For security engineers looking to consider this new threat vector I recommend:
- Leverage your existing security knowledge as a foundation
- Invest time in understanding AI-specific vulnerabilities and attack vectors
- Partner closely with data scientists and ML engineers to bridge knowledge gaps
- Adopt AI-specific security tools and methodologies while maintaining traditional security practices
- Stay current with the rapidly evolving AI security landscape
- Think outside the box, this is a totally new space with new rules
- Use community resources available to you such as MLSecOps
The intersection of application security and AI represents one of the most exciting and challenging frontiers in cybersecurity today. By bringing our traditional security expertise to bear while embracing the unique aspects of AI systems, we can help ensure these powerful technologies remain secure, reliable, and trustworthy.
Find a topic you care about
Get the best of Protect AI Blogs and News delivered to your inbox
Subscribe our newsletter for latest AI news. Let's stay updated!
Related Blogs

Security Spotlight: Embracing a Culture of Security at Protect AI

Why LLMs Are Just the Tip of the AI Security Iceberg
