OpenAI's Safety Systems team is seeking a Research Engineer for its Preparedness initiative, focused on biological and other CBRN (chemical, biological, radiological, nuclear) risks. The role is central to ensuring the safe deployment of frontier AI models and preparing for potential catastrophic risks. The position offers a salary range of $245K-$440K plus equity and comprehensive benefits.
The Preparedness team's mission is twofold: monitoring and predicting the capabilities of frontier AI systems, with a focus on catastrophic misuse risks, and developing concrete procedures and infrastructure for safe AI development. As a Research Engineer, you'll be at the forefront of identifying and evaluating AI safety risks, building scalable evaluation systems, and establishing best practices for AI safety.
The ideal candidate combines technical expertise in machine learning with a deep understanding of AI safety concerns. You'll need experience in ML research engineering, observability, and monitoring, as well as the ability to work effectively in a fast-paced environment. The role calls for a "red-teaming mindset" and strong cross-functional collaboration skills.
This position offers an opportunity to directly impact the safe development of AGI while working with cutting-edge technology. You'll join a team dedicated to ensuring AI benefits humanity, with access to comprehensive benefits including medical insurance, mental health support, generous parental leave, and professional development opportunities. The role requires U.S. person status due to export control requirements.
OpenAI is committed to diversity, inclusion, and equal opportunity, making it a strong fit for those passionate about responsible AI development and its societal implications.