Engage
Visit our Booth
Hands-on demos, insightful lightning talks, interactive community engagement — the Protect AI booth
(S-1549) is your one-stop shop for AI and ML Security.
Product Demos
Join our team of experts to experience firsthand how our products help you secure your AI, including AI Model Security, LLM Runtime Security, Automated Generative AI Red Teaming, and more.
Lightning Talks
From AI security best practices to real-life AI threats, learn more about the evolving landscape of AI security from seasoned AI security researchers, industry experts, and community members.
Fun Swag
Engage with us and learn about securing your AI investments for the chance to take home cool swag you'll actually want to use!
ON THE AGENDA at RSAC
Join Our Experts for Essential Sessions on AI Security
From foundational principles to advanced threat detection, Protect AI's experts will guide you through the evolving landscape of AI security challenges and solutions during the conference.
Principles of GenAI Security: Foundations for Building Security In
Security Foundations Session, presented by Diana Kelley
Monday, April 28th at 9:40 AM
Shadow AI: Shining the Governance Light on AI
Intersection of AI & Security Panel, moderated by Diana Kelley
Monday, April 28th at 1:10 PM
Threat Modeling in the Age of AI
Threat Modeling Connect Meet-up with panelist Diana Kelley
Tuesday, April 29th, 2 PM - 4 PM
Securing Retrieval Pipelines in Large Language Models
Security Strategy and Architecture Session, presented by Oleksandr Yaremchuk and Neal Swaelens
Tuesday, April 29th at 2:25 PM
Unmasking Hidden Threats in the World's Largest AI Hub
Intersection of AI & Security Session, presented by Will Armiros and Sam Washko
Wednesday, April 30th at 1:15 PM
Meet our Speakers

Diana Kelley
CISO
Diana Kelley is the Chief Information Security Officer (CISO) for Protect AI. She also serves on the boards of WiCyS, The Executive Women’s Forum (EWF), InfoSec World, TechTarget Security Editorial, and DevNet AI/ML. Diana was Cybersecurity Field CTO for Microsoft, Global Executive Security Advisor at IBM Security, GM at Symantec, VP at Burton Group (now Gartner), and a Manager at KPMG. Her extensive volunteer work has included serving on the ACM Ethics & Plagiarism Committee and as a Cybersecurity Committee Advisor at CompTIA. She is a sought-after keynote speaker, the host of BrightTALK’s The Security Balancing Act, co-author of the books Practical Cybersecurity Architecture and Cryptographic Libraries for Developers, the EWF 2020 Executive of the Year, and EWF Conference Chair from 2021 to the present.

Will Armiros
Tech Lead Manager
William Armiros is a Tech Lead Manager at Protect AI, where he is building systems to help ML engineers and data scientists introduce security into their MLOps workflows effortlessly. Previously, he led a team at AWS working on application observability and distributed tracing. During that time, he contributed to the industry-wide OpenTelemetry standard and helped lead the effort to release an AWS-supported distribution of it. He is passionate about making the observability and security of AI-enabled applications as seamless as possible.

Sam Washko
Senior Software Engineer
Sam Washko, Senior Software Engineer at Protect AI, specializes in developing tools to enhance the security of AI and machine learning systems. She led development of ModelScan, a scanner to detect attacks in ML model files. Her work on ModelScan includes the public open source tool and a more advanced version powering the core of Guardian, Protect AI’s flagship model scanner used as an official scanner for all models on Hugging Face. She holds a BS in Computer Science from Duke University, and prior to joining Protect AI, she was part of the Azure SQL Security Feature Team and Blue Team at Microsoft, designing cryptography and telemetry systems. She has a passion for connecting theory and problem solving with engineering to produce solutions that make computing more secure for everyone.

Oleksandr Yaremchuk
Head of Engineering
With over 14 years of experience as a hands-on engineer and leader, Oleksandr Yaremchuk brings deep expertise in building and securing complex systems. Over the past year, he has focused on large language model (LLM) security, starting with the development of the leading open source LLM firewall, LLM Guard. This project was acquired by Protect AI, where he now leads an LLM Security team focused on creating scalable solutions to address the emerging challenges of LLM security.

Neal Swaelens
Head of Product, LLM Security
Neal Swaelens is the Head of Product, LLM Security at Protect AI. Neal is an expert in securing large language models (LLMs), machine learning, and AI. Previously, he was an early-stage startup investor and founded Laiyer AI, a leading open source LLM security product that was acquired by Protect AI. Laiyer AI's threat detection models for large language models have been downloaded more than five million times.
Follow our Communities
AI Security Education
MLSecOps
Data scientists, ML and AppSec professionals, regulators, and business leaders can learn best practices in MLSecOps, listen to podcasts with thought leaders, and connect with our thriving Slack community.
AI Threat Research
Huntr
The world's first AI bug bounty platform, huntr provides a single place for security researchers to submit vulnerabilities, helping ensure the security and stability of AI applications. The huntr community is the place to start your journey into AI threat research.