Research Engineer / Scientist, Safety Reasoning

AI research and deployment company dedicated to ensuring general-purpose artificial intelligence benefits all of humanity.
$245,000 - $440,000
Machine Learning
Senior Software Engineer
In-Person
5+ years of experience
AI

Description For Research Engineer / Scientist, Safety Reasoning

OpenAI is seeking a Research Engineer/Scientist for its Safety Systems team, focused on ensuring the safe deployment of AI models. The role combines practical projects with fundamental research in AI safety, developing machine learning techniques that improve foundation models' safety understanding and capabilities. You'll work on critical initiatives including moderation policy enforcement, democratic policy development, and safety reward modeling.

The Safety Reasoning Research team operates at the intersection of immediate practical needs and long-term research goals. Key responsibilities include improving AI models' ability to reason about safety, values, and cultural norms; developing moderation systems; and addressing crucial societal challenges such as election misinformation. The role requires expertise in Python programming and AI safety concepts, with a focus on reinforcement learning from human feedback (RLHF), adversarial training, and fairness.

The role offers competitive compensation ($245K-$440K) plus equity and comprehensive benefits, including medical, dental, and vision insurance, mental health support, and generous parental leave. The position is based in San Francisco, where you'll join a team dedicated to ensuring AI benefits humanity. This is an opportunity to shape the future of safe AI development while working with cutting-edge technology and contributing to OpenAI's mission of building beneficial AGI.

The ideal candidate will have 5+ years of research engineering experience, strong programming skills, and a deep commitment to AI safety. You'll collaborate with policy researchers, develop automated systems, and help create robust safety measures for AI deployment. OpenAI values diversity and maintains an inclusive work environment, offering reasonable accommodations and equal opportunities to all qualified candidates.

Responsibilities For Research Engineer / Scientist, Safety Reasoning

  • Conduct applied research to improve AI models' reasoning about human values, morals, ethics, and cultural norms
  • Develop and refine AI moderation models
  • Work with policy researchers on content policies
  • Contribute to multimodal content analysis research
  • Develop and improve pipelines for automated data labeling and augmentation
  • Design and implement a red-teaming pipeline for harm prevention systems

Requirements For Research Engineer / Scientist, Safety Reasoning

Python
  • 5+ years of research engineering experience
  • Proficiency in Python or similar languages
  • Experience with large-scale AI systems and multimodal datasets (preferred)
  • Proficiency in AI safety techniques (RLHF, adversarial training, robustness, fairness and bias)
  • Enthusiasm for AI safety
  • Alignment with OpenAI's mission and charter

Benefits For Research Engineer / Scientist, Safety Reasoning

Medical Insurance
Dental Insurance
Vision Insurance
Mental Health Assistance
401k
Parental Leave
Education Budget
  • Medical, dental, and vision insurance for you and your family
  • Mental health and wellness support
  • 401(k) plan with 50% matching
  • Generous time off and company holidays
  • 24 weeks of paid birth-parent leave and 20 weeks of paid parental leave
  • Annual learning & development stipend ($1,500 per year)
  • Equity
