Anthropic is at the forefront of AI research, developing powerful models with the potential to transform how people and machines interact. As a Security Engineer at Anthropic, you'll play a crucial role in protecting our AI systems against exfiltration and misuse. Your responsibilities will include implementing secure controls for our AI training pipelines, applying proven security architecture patterns, and safeguarding our model weights as we scale capabilities.
You'll design and implement secure-by-default controls for our software supply chain, AI model training systems, and deployment environments. Your work will include security architecture reviews, threat modeling, and vulnerability assessments to identify and mitigate risks. You'll also support our responsible disclosure and bug bounty programs and participate in the Security Engineering team's on-call rotation.
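To give a concrete flavor of the "secure-by-default" idea, here is a minimal, hypothetical Python sketch (the create_artifact_bucket helper and its parameters are illustrative, not an Anthropic API): safe settings are the default, and opting out is explicit and logged.

```python
import logging

logger = logging.getLogger("security")

def create_artifact_bucket(name: str, *, public: bool = False,
                           encrypted: bool = True) -> dict:
    """Create artifact storage that is private and encrypted by default."""
    if public:
        # Opting out of the safe default is possible, but loud and auditable.
        logger.warning("bucket %s created with PUBLIC access", name)
    return {"name": name, "public": public, "encrypted": encrypted}

bucket = create_artifact_bucket("build-artifacts")
assert bucket["public"] is False and bucket["encrypted"] is True
```

The design point is that callers who say nothing get the safe configuration; the insecure path requires a deliberate, visible choice.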
As a senior member of the team, you'll mentor and coach other security engineers, contribute to company-building activities like interviewing, and help raise security awareness across the organization. You'll lead large-scale efforts such as implementing multi-party authorization for AI-critical infrastructure, reducing sensitive production access, and securing build pipelines.
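Multi-party authorization, in its simplest form, means no single person can unilaterally perform a sensitive action. The following minimal Python sketch illustrates the idea only; the AccessRequest class and its quorum rule are hypothetical, not Anthropic's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AccessRequest:
    """A sensitive action that needs sign-off from a quorum of peers."""
    action: str
    requester: str
    quorum: int = 2                       # distinct approvals required
    approvals: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        if approver == self.requester:
            raise PermissionError("requesters cannot approve their own requests")
        self.approvals.add(approver)

    def is_authorized(self) -> bool:
        return len(self.approvals) >= self.quorum

req = AccessRequest(action="export-model-weights", requester="alice")
req.approve("bob")
assert not req.is_authorized()            # one approval is not enough
req.approve("carol")
assert req.is_authorized()                # quorum of two distinct approvers met
```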
The ideal candidate will have 8+ years of software development experience with a security focus, proficiency in languages like Rust, Python, and JavaScript/TypeScript, and a track record of successfully launching security initiatives. You should be passionate about making AI systems safer, more interpretable, and aligned with human values.
Join Anthropic in our mission to create beneficial AI systems that can transform society while ensuring they remain safe and secure. You'll work in a collaborative environment with a team of committed researchers, engineers, policy experts, and business leaders, all focused on advancing the field of AI in a responsible and impactful way.