OpenAI is seeking a Research Engineer for its Preparedness team within the Safety Systems division. The role is central to ensuring the safe deployment of frontier AI models and to preparing for increasingly capable AI systems, combining technical expertise in machine learning with a focus on AI safety and risk assessment.
The Safety Systems division is at the forefront of OpenAI's mission to build and deploy safe AGI. Within it, the Preparedness team focuses on identifying, tracking, and preparing for catastrophic risks posed by frontier AI models. The team's work involves monitoring emerging AI capabilities, assessing potential misuse, and developing concrete procedures and infrastructure for the safe handling of powerful AI systems.
As a Research Engineer, you'll push the boundaries of frontier models while maintaining a strong focus on safety. The role calls for a combination of hands-on ML engineering skill and a deep understanding of AI safety concerns. You'll build and refine evaluations of AI models, design scalable systems, and contribute to best practices in AI safety.
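To make the evaluation work concrete, here is a minimal sketch of what a capability-evaluation harness might look like. It is illustrative only: `EvalCase`, `run_model`, `grade_response`, and `EVAL_CASES` are hypothetical names, not OpenAI's actual evaluation stack, and real frontier-model evals use far larger case sets and more robust grading than exact string match.

```python
# A minimal, hypothetical capability-evaluation harness.
# Names and grading logic are illustrative, not OpenAI's real tooling.
from dataclasses import dataclass

@dataclass
class EvalCase:
    prompt: str
    expected: str  # reference answer used for grading

# Toy eval set; a real eval suite would be far larger and curated.
EVAL_CASES = [
    EvalCase(prompt="What is 2 + 2?", expected="4"),
    EvalCase(prompt="Name the capital of France.", expected="Paris"),
]

def run_model(prompt: str) -> str:
    """Stub for a model call; swap in a real inference API."""
    return "4" if "2 + 2" in prompt else "Paris"

def grade_response(response: str, expected: str) -> bool:
    """Naive grader: exact match after normalization."""
    return response.strip().lower() == expected.strip().lower()

def run_eval() -> float:
    """Run every case through the model and return the pass rate."""
    passed = sum(
        grade_response(run_model(case.prompt), case.expected)
        for case in EVAL_CASES
    )
    return passed / len(EVAL_CASES)

if __name__ == "__main__":
    print(f"pass rate: {run_eval():.0%}")
```

Even in this toy form, the shape is representative: a fixed case set, a model-call boundary that can be swapped for different systems, and a grading function that can be refined independently of the cases themselves.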
The ideal candidate brings strong machine learning engineering experience, with particular emphasis on ML observability and monitoring, and is comfortable owning projects end to end in a fast-paced research environment. Experience red-teaming systems, an understanding of the societal impacts of AI deployment, and strong cross-functional communication skills are all valuable additions.
OpenAI offers competitive compensation ($200K – $370K plus equity) and the chance to work on cutting-edge AI technology. The company values diversity and maintains a strong commitment to equal opportunity employment. This role is a rare opportunity to shape the future of AI safety and to advance OpenAI's mission of ensuring that general-purpose artificial intelligence benefits all of humanity.