Research Engineer, Safety Reasoning

OpenAI is an AI research and deployment company dedicated to ensuring that artificial general intelligence benefits all of humanity.
Salary: $245,000 – $440,000
Category: Machine Learning
Level: Senior Software Engineer
Location: In-Person
Company size: 1,000 – 5,000 employees
Experience: 5+ years
Industry: AI

Description For Research Engineer, Safety Reasoning

The Safety Reasoning Research team at OpenAI is seeking a Research Engineer to develop innovative machine learning techniques that enhance the safety understanding and capability of foundation models. This role involves defining and developing impactful safety tasks, improving moderation models, and contributing to policy development. Key responsibilities include:

  • Conducting applied research to improve foundation models' ability to reason about human values, ethics, and cultural norms.
  • Developing and refining AI moderation models to detect and mitigate AI misuse and abuse.
  • Collaborating with policy researchers to iterate on content policies.
  • Contributing to multimodal content analysis research.
  • Developing pipelines for automated data labeling, model training, and deployment.
  • Designing effective red-teaming pipelines to examine system robustness.

The ideal candidate will have 5+ years of research engineering experience, proficiency in Python, and a strong background in AI safety. Experience with large-scale AI systems and multimodal datasets is a plus. This role offers an opportunity to work at the forefront of AI safety, contributing to OpenAI's mission of building safe, universally beneficial AGI.

OpenAI fosters a diverse and inclusive work environment, offers equal employment opportunities, and provides reasonable accommodations for applicants with disabilities. The company is committed to pushing the boundaries of AI capabilities while prioritizing safety and human needs.

Join OpenAI in shaping the future of technology and ensuring that the benefits of AI are widely shared.

Responsibilities For Research Engineer, Safety Reasoning

  • Conduct applied research to improve foundation models' reasoning about human values, ethics, and cultural norms
  • Develop and refine AI moderation models
  • Collaborate with policy researchers on content policies
  • Contribute to multimodal content analysis research
  • Develop pipelines for automated data labeling, model training, and deployment
  • Design effective red-teaming pipelines

Requirements For Research Engineer, Safety Reasoning

  • 5+ years of research engineering experience
  • Proficiency in Python or similar languages
  • Experience with large-scale AI systems and multimodal datasets (a plus)
  • Expertise in AI safety topics (reinforcement learning from human feedback (RLHF), adversarial training, robustness, fairness and bias)
  • Enthusiasm for AI safety and dedication to enhancing the safety of cutting-edge AI models
  • Alignment with OpenAI's mission and charter

Benefits For Research Engineer, Safety Reasoning

  • Equity
