Anthropic is seeking a Research Engineer for its Frontier Red Team to develop and implement "gold standard" evaluations for catastrophic risks in AI systems. The role is central to implementing the company's Responsible Scaling Policy (RSP) and to ensuring the safe deployment of frontier AI models. The work involves building evaluation systems for some of the most capable AI systems ever created, in collaboration with experts across domains including biosecurity, cybersecurity, and national security. The ideal candidate combines strong engineering skills with a dedication to AI safety, building and scaling novel evaluation infrastructure that could become an industry standard.

The role offers competitive compensation ($280,000-$340,000), a hybrid work arrangement in San Francisco, and comprehensive benefits. Anthropic operates as a public benefit corporation, taking a big-science approach to AI research with a collaborative, impact-driven culture. The company values diverse perspectives and encourages applications from candidates who may not meet every qualification but are passionate about contributing to safe and beneficial AI development.