Research Engineer, Safety Reasoning

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.
Salary: $245,000 - $440,000
Category: Machine Learning
Level: Senior Software Engineer
Location Type: In-Person
Company Size: 1,000 - 5,000 Employees
Experience: 5+ years
Industry: AI

Description For Research Engineer, Safety Reasoning

The Safety Reasoning Research team at OpenAI is seeking a Research Engineer to develop innovative machine learning techniques that enhance the safety understanding and capability of foundation models. This role involves defining and developing impactful safety tasks, improving moderation models, and contributing to policy development. Key responsibilities include:

  • Conducting applied research to improve foundation models' ability to reason about human values, ethics, and cultural norms.
  • Developing and refining AI moderation models to detect and mitigate AI misuse and abuse.
  • Collaborating with policy researchers to iterate on content policies.
  • Contributing to multimodal content analysis research.
  • Developing pipelines for automated data labeling, model training, and deployment.
  • Designing effective red-teaming pipelines to examine system robustness.

The ideal candidate will have 5+ years of research engineering experience, proficiency in Python, and a strong background in AI safety. Experience with large-scale AI systems and multimodal datasets is a plus. This role offers an opportunity to work at the forefront of AI safety, contributing to OpenAI's mission of building safe, universally beneficial AGI.

OpenAI provides a diverse and inclusive work environment, offers equal opportunity to all applicants, and will consider accommodations for applicants with disabilities. The company is committed to pushing the boundaries of AI capabilities while prioritizing safety and human needs.

Join OpenAI in shaping the future of technology and ensuring that the benefits of AI are widely shared.


Responsibilities For Research Engineer, Safety Reasoning

  • Conduct applied research to improve foundation models' reasoning about human values, ethics, and cultural norms
  • Develop and refine AI moderation models
  • Collaborate with policy researchers on content policies
  • Contribute to multimodal content analysis research
  • Develop pipelines for automated data labeling, model training, and deployment
  • Design effective red-teaming pipelines

Requirements For Research Engineer, Safety Reasoning

  • 5+ years of research engineering experience
  • Proficiency in Python or similar languages
  • Experience with large-scale AI systems and multimodal datasets (a plus)
  • Expertise in AI safety topics (RLHF, adversarial training, robustness, fairness and bias)
  • Enthusiasm for AI safety and dedication to enhancing the safety of cutting-edge AI models
  • Alignment with OpenAI's mission and charter

Benefits For Research Engineer, Safety Reasoning

  • Equity


Jobs Related To OpenAI Research Engineer, Safety Reasoning

Research Engineer, Multimodal

Senior Research Engineer position at OpenAI focusing on multimodal AI safety, evaluation, and development, offering competitive compensation and comprehensive benefits.

Research Engineer / Research Scientist, Post-Training

Senior ML research position at OpenAI focusing on training and improving AI models for ChatGPT and API deployment, offering competitive compensation and comprehensive benefits.

Researcher (Engineer/Scientist), Training Architecture

OpenAI seeks a Researcher for Training Architecture to enhance LLM capabilities, offering $360K-$440K salary, equity, and comprehensive benefits in San Francisco.

Research Engineer, Pre-training Architecture

OpenAI is seeking a Research Engineer for Pre-training Architecture to advance neural network architecture for language models.

Research Engineer, Post-training Model Capability

OpenAI seeks a Research Engineer for post-training model capability, offering $360K+ and equity. Shape the future of AI at a leading research company.