Anthropic is at the forefront of creating reliable, interpretable, and steerable AI systems, with a mission to ensure AI remains safe and beneficial for society. As a Security Engineer, you'll join their Security Engineering team, playing a crucial role in safeguarding their AI systems and maintaining user trust. The position involves building security for large-scale AI clusters, implementing robust cloud security architectures, and designing secure-by-design workflows.
The role requires a seasoned professional with 7+ years of software engineering experience and a strong background in implementing critical systems at scale. You'll work with cloud platforms (AWS/GCP) and Kubernetes while writing secure Python code, and you'll have the opportunity to shape security practices in the emerging field of AI development.
Anthropic operates as a public benefit corporation, emphasizing collaborative research and development. They value impact over conventional metrics, treating AI research as an empirical science. The company offers competitive compensation ($300,000-$320,000), generous benefits, and a hybrid work environment in San Francisco.
This role is ideal for someone passionate about AI safety and alignment who brings strong technical expertise in security engineering. You'll work on meaningful problems at the intersection of AI and security, helping build trustworthy AI systems backed by robust security measures.