OpenAI's Safety Systems team is seeking a Research Engineer for its Preparedness team, which focuses on ensuring the safe deployment of frontier AI models. The role sits at the forefront of OpenAI's mission to build and deploy safe AGI, and involves working with cutting-edge AI technology to identify, track, and prepare for catastrophic risks posed by frontier models.
The role combines technical expertise in machine learning with a strong focus on safety and risk management. You'll work in a dynamic environment, helping to shape the empirical understanding of AI safety concerns and owning individual projects end to end. The position offers competitive compensation ($245K–$440K) plus equity.
The ideal candidate will have strong ML research engineering experience, excellent problem-solving abilities, and a deep understanding of AI safety risks. You'll be part of a team responsible for monitoring and predicting AI system capabilities, developing evaluation frameworks, and establishing safety procedures for powerful AI systems.
This is an opportunity to directly impact the safe development of AGI while working with world-class researchers and engineers. The role offers significant growth potential and the chance to contribute to crucial safety initiatives. Benefits include comprehensive healthcare, mental health support, generous parental leave, and professional development opportunities.