OpenAI's Safety Systems team is at the forefront of safe AI development and deployment, focusing on both immediate practical applications and long-term research. The Safety Reasoning Research team works on improving foundation models' ability to reason about safety, values, and cultural norms, while developing robust moderation systems and addressing critical societal challenges.
As a Research Engineer/Scientist in Safety Reasoning, you'll pioneer innovative machine learning techniques to enhance our foundation models' safety understanding and capabilities. The role combines cutting-edge research with practical application, working on critical initiatives such as moderation policy enforcement, democratic policy development, and safety reward modeling.
The position offers a competitive compensation package ranging from $245K to $440K, plus equity and comprehensive benefits including medical, dental, and vision insurance, mental health support, and a generous 401(k) matching program. You'll be part of a team that values diversity, safety-first approaches, and is committed to beneficial AI development.
Working at OpenAI means contributing to the development of safe, universally beneficial AGI. The role requires strong technical expertise, including 5+ years of research engineering experience and proficiency in Python. Your work will directly shape how AI systems understand and implement safety measures, offering an opportunity to influence the future of AI technology while ensuring it remains aligned with human values and safety requirements.
Join a team that's pushing the boundaries of AI capabilities while maintaining a strong focus on safety and ethical considerations. You'll collaborate with policy researchers, contribute to multimodal content analysis, and help develop robust systems for preventing harmful AI behaviors. This role offers the unique opportunity to work on both theoretical and practical aspects of AI safety, making a real difference in how AI technology is developed and deployed responsibly.