Introduction
I've spent decades watching the cybersecurity landscape evolve, from the early days of viruses that spread slowly via the floppy-disk “sneaker-net” to today's ultra-fast-moving, sophisticated AI-powered threats. One thing has remained constant: waiting until something goes wrong, or trying to bolt security on late in the process, is ineffective. Reactive security measures alone are insufficient. Building security in from the very beginning not only leads to better security outcomes for products and systems; when implemented with optimization and automation in mind, it can actually speed up implementation and release cycles.
That’s why Protect AI is proud to be a signatory to CISA's Secure by Design pledge, joining a growing coalition of technology leaders committed to building security into products from the ground up.
The Cost of Reactive Security
Despite the best efforts of forward-thinking security experts, the reality is that in the drive to innovate, many companies adopt and build new solutions without taking security into account up front. The result is that rather than being a reinforcing thread woven deeply throughout the system, security becomes an afterthought that gets patched on late in the process, often in response to incidents or vulnerabilities. This approach has led to a costly cycle of patch management, incident response, and customer anxiety. As we develop increasingly powerful AI systems, we simply cannot afford to maintain this reactive stance.
The statistics tell a sobering story. In 2024, the average cost of a data breach reached $4.88 million. As attackers adopt AI and automation in their attack toolkits, the total cost of breaches is likely to increase. But those figures may not capture the full picture: the lost customer trust, regulatory scrutiny, and operational disruptions that follow security incidents can have lasting impacts on businesses and their stakeholders, too.
Why Secure by Design Matters for AI Security
As an AI security company, we face unique challenges. Our systems are built to process vast amounts of customer data, and customers depend on our solutions to establish security benchmarks and baselines. For example, our model scanning tool, Guardian, is used by customers to scan, understand, and govern the security posture of first- and third-party models. If Guardian fails to identify malicious code, it could affect a customer’s model integrity, decision-making processes, and their fundamental trust in AI systems.
Secure by Design principles align with our vision for responsible development. By committing to these principles, we're pledging to:
- Take ownership of security outcomes for our customers
- Embrace radical transparency and accountability
- Lead from the top
Those all sound like great principles, but putting them into place takes careful planning and practice.
How Protect AI Implements Secure by Design
To make our commitment to the pledge real, we’re implementing Secure by Design principles in the following ways:
Take ownership of security outcomes for our customers
To meet this principle, we take a variety of actions to ensure that the burden of security does not fall solely on the customer. Security is always a shared responsibility between customers and providers, but this principle guides us to take accountability for the part of the burden that we can bear rather than pushing it onto the customer. Some of the ways we meet this principle in practice include:
- Integrating a chain of trust into our CI/CD pipeline, from the code repository through the build, deployment, and execution of workloads
- Implementing secure-by-default practices, using Infrastructure as Code templates and Kubernetes manifests to enforce security hardening and least-privilege principles (see the sketch after this list)
- Performing extensive unit, internal, and third-party testing to ensure that our security features behave as expected during use (and misuse)
- Creating well-defined product development architectures with built-in security automation to empower engineering
- Scanning the models used in our CI/CD pipeline
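To make the "secure by default" idea concrete, here is a minimal sketch of what least-privilege hardening can look like when workload definitions are expressed as code. It uses the official Kubernetes Python client; the workload name, image, and specific settings are illustrative placeholders rather than Protect AI's actual deployment configuration.

```python
# Minimal sketch: least-privilege defaults expressed with the official
# Kubernetes Python client. Names and image are illustrative placeholders.
from kubernetes import client

hardened_context = client.V1SecurityContext(
    run_as_non_root=True,                  # refuse to run the container as root
    read_only_root_filesystem=True,        # treat the root filesystem as immutable
    allow_privilege_escalation=False,      # block setuid-style privilege escalation
    capabilities=client.V1Capabilities(drop=["ALL"]),  # drop every Linux capability
)

container = client.V1Container(
    name="example-workload",                # placeholder name
    image="registry.example.com/app:1.0",   # placeholder image
    security_context=hardened_context,
)

pod_spec = client.V1PodSpec(
    containers=[container],
    automount_service_account_token=False,  # no implicit API credentials in the pod
)
```

Defining defaults like these in templates, and checking generated manifests against them in CI, is one way to keep secure-by-default guarantees from regressing as infrastructure code evolves.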
Embrace radical transparency and accountability
Building secure products is important, but if customers don’t have insight into what that looks like in practice, they cannot understand the risk profile of the software they are adopting. We take accountability for security via the following actions:
- Working within an industry standard secure development lifecycle (SDLC) framework
- Adhering to documented secure development guidelines
- Using open source software (OSS) responsibly
- Supporting an educated software engineering workforce that consistently shares and teaches security best practices across cross-functional product teams
- Publishing a Vulnerability Disclosure Policy
We also support transparency through an open-door policy that invites customers to ask us about our product security standards, and through continuous monitoring and reporting tools that let customers check our current status at any time via our trust portal.
Lead from the top
Building with a Secure by Design mindset requires support from the very top. The security principles at Protect AI start at the top, with our C-Suite and Board of Directors. To make sure we maintain this principle in practice, we take the following actions:
- Regular security updates to the Board of Directors
- A dedicated application security team that works closely with product engineering
Beyond Compliance
While regulatory compliance is important, our commitment to Secure by Design goes beyond checking boxes. We're fundamentally rethinking how we approach security in AI development. This means:
- Treating security as a core feature, not a constraint
- Empowering developers with security tools and knowledge
- Creating feedback loops between security teams and AI researchers
- Building security considerations into our earliest design discussions
A Call to Action
The AI industry stands at a crossroads. As we develop increasingly powerful systems, our responsibility to build them securely grows exponentially. The CISA Secure by Design pledge provides a framework for meeting this responsibility, but its success depends on widespread adoption and genuine commitment from technology leaders.
Secure by Design should be the foundation upon which we build the future of AI.
Together, we can create a more secure and trustworthy AI ecosystem that benefits everyone.
This pledge isn't just about protecting our systems or our company's reputation. It's about ensuring that as AI becomes more integrated into critical systems and daily life, we can trust it to operate safely and securely. That's a future worth building, and it starts with designing security in from the beginning.