Research Engineer / Scientist, Safety Reasoning

AI research and deployment company dedicated to ensuring general-purpose artificial intelligence benefits all of humanity.
$245,000 - $440,000
Machine Learning
Staff Software Engineer
In-Person
5+ years of experience
AI

Description For Research Engineer / Scientist, Safety Reasoning

OpenAI's Safety Systems team is at the forefront of ensuring the safe development and deployment of AI, focusing on both immediate practical applications and long-term research. The Safety Reasoning Research team works on improving foundation models' ability to reason about safety, values, and cultural norms, while developing robust moderation systems and addressing critical societal challenges.

As a Research Engineer/Scientist in Safety Reasoning, you'll pioneer machine learning techniques that improve our foundation models' understanding of safety and the capabilities built on it. The role combines cutting-edge research with practical application, spanning critical initiatives such as moderation policy enforcement, democratic policy development, and safety reward modeling.

The position offers a competitive compensation package ranging from $245K to $440K, plus equity and comprehensive benefits including medical, dental, and vision insurance, mental health support, and a generous 401(k) matching program. You'll be part of a team that values diversity, takes a safety-first approach, and is committed to beneficial AI development.

Working at OpenAI means contributing to the development of safe, universally beneficial AGI. The role requires strong technical expertise, with 5+ years of research engineering experience and proficiency in Python. Your work will directly impact how AI systems understand and implement safety measures, making this an opportunity to shape the future of AI technology while ensuring it remains aligned with human values and safety requirements.

Join a team that's pushing the boundaries of AI capabilities while maintaining a strong focus on safety and ethical considerations. You'll collaborate with policy researchers, contribute to multimodal content analysis, and help develop robust systems for preventing harmful AI behaviors. This role offers the unique opportunity to work on both theoretical and practical aspects of AI safety, making a real difference in how AI technology is developed and deployed responsibly.


Responsibilities For Research Engineer / Scientist, Safety Reasoning

  • Conduct applied research to improve foundation models' reasoning about human values, morals, ethics, and cultural norms
  • Develop and refine AI moderation models to detect and mitigate AI misuse and abuse
  • Work with policy researchers to adapt and iterate on content policies
  • Contribute to research on multimodal content analysis
  • Develop and improve pipelines for automated data labeling and augmentation
  • Design and experiment with effective red-teaming pipelines

Requirements For Research Engineer / Scientist, Safety Reasoning

  • 5+ years of research engineering experience
  • Proficiency in Python or similar languages
  • Experience with large-scale AI systems and multimodal datasets (preferred)
  • Proficiency in AI safety techniques (RLHF, adversarial training, robustness, fairness and bias)
  • Enthusiasm for AI safety and a commitment to making AI models safer
  • Alignment with OpenAI's mission and charter

Benefits For Research Engineer / Scientist, Safety Reasoning

  • Medical, dental, and vision insurance for you and your family
  • Mental health and wellness support
  • 401(k) plan with 50% matching
  • Unlimited time off and 13 company holidays per year
  • Paid parental leave (24 weeks of paid leave for birth parents and 20 weeks of paid leave for all other parents)
  • Annual learning & development stipend ($1,500 per year)
