Senior Security Engineer
Protect AI is seeking an experienced Senior Security Engineer to join our team and lead the development and implementation of security measures for our organization's AI (artificial intelligence) applications and systems. The successful candidate will be responsible for identifying and mitigating security risks, developing security policies and procedures, and helping build our core products.
Remote Allowed: Yes
Locations: Seattle, Dallas/Austin, Raleigh
Key Responsibilities:
- Identify and assess security risks associated with artificial intelligence applications and systems
- Develop and implement security measures to protect against identified risks with our product team
- Create and maintain internal security policies and procedures
- Stay current with the latest trends and developments in artificial intelligence and cybersecurity
- Provide guidance and training to the team on security best practices
- Lead incident response efforts in case of a security breach
Required Technical Capabilities:
- Strong understanding of artificial intelligence concepts and technologies
- Experience with machine learning, deep learning, and neural networks
- Strong understanding of security principles and best practices
- Experience with security tools such as vulnerability scanners, intrusion detection systems, and firewalls
- Experience with programming languages such as Python, JavaScript, or C/C++
- Understanding of the regulations and compliance related to artificial intelligence and security
Education:
- Bachelor's degree in computer science, security, or a related field
- Relevant certifications such as the CISSP, CISM, or CRISC are a plus
Experience:
- Minimum of 4 years of experience in a cybersecurity or related role
- Experience in securing artificial intelligence applications and systems is a plus
As the artificial intelligence field is rapidly growing, the security of these systems is becoming increasingly important. The Senior Security Engineer will play a key role in ensuring that the organization's AI assets, applications, and systems are protected against security risks, and that the organization is in compliance with regulations and industry standards.
About Protect AI: Building a Safer AI Powered World
Why We Exist: AI is a critical code base in everything from autonomous vehicles and malware detectors to zero-carbon emission control systems, and that code base is only growing. As with any digital transformation, such as mobile web, IoT, and Web3, new types of cybersecurity vulnerabilities emerge. The key to mitigation is visibility, insights, and actions that help cybersecurity teams and developers understand threats, spot gaps, and harden systems. As AI systems expand in capabilities, code bases, and data sources used by development teams, opportunities for security incidents have increased and new vulnerabilities have been created in the enterprise ML supply chain.
Who We Are: Our founders each have nearly two decades of experience leading some of the largest businesses in data science, AI, and ML. We’ve helped hundreds of companies scale their ML operations, commonly known as MLOps. That scale and scope gave us insight into the unique security needs of ML environments and an understanding of what it takes to blend tools, talent, and processes to better secure AI systems. We call that ML Security Operations, or MLSecOps.
What We Do: ML and security teams need new tools and processes to better secure their AI systems. Just as DevSecOps incorporates security directly into DevOps workflows, tools, and processes, Protect AI is building the same thing for MLSecOps. We are embedding security deeply into the MLOps workflows and tools for a more secure and comprehensive approach. Our first and free offering, NB Defense (Notebook Defense), provides data scientists a seamless way to scan their Jupyter Notebooks for common vulnerabilities, dependency checks, third-party licenses, personally identifiable information leakage, and API secrets exposure.
Protect AI’s solutions for MLSecOps provide security professionals, ML practitioners, and risk officers with a common cybersecurity landscape, along with clear context across AI systems and ML code bases. Our offerings provide a comprehensive security test suite for the unique vulnerabilities found in AI and ML systems, and provide clear remediations so enterprises can deliver more secure ML models and AI systems.
Who Backs Us: The company’s $13.5M seed funding was co-led by some of the biggest names in day-one cybersecurity and AI investing, including Acrew Capital, boldstart ventures, and Pelion Ventures. Additional investors include CEOs and founders of leading companies in the shift-left, cloud security, and data protection domains. We are well funded and positioned to weather the shifting economic landscape. Our initial build partners include category leaders from key technology suppliers, energy, entertainment, finance, healthcare, software, and national security.
What We Offer: Our market-competitive compensation package, strong benefits, and culture are among those you would find at the biggest technology companies. But here, you have the ability to build this company alongside other like-minded people who share the opportunity and vision to build a safer AI-powered world. Learn more about our benefits here.
Equal Employment Opportunity:
Protect AI provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.