Research Engineer / Scientist, Safety Reasoning

AI research and deployment company dedicated to ensuring general-purpose artificial intelligence benefits all of humanity.
$245,000 - $440,000
Machine Learning
Senior Software Engineer
In-Person
5+ years of experience
AI

Description For Research Engineer / Scientist, Safety Reasoning

OpenAI is seeking a Research Engineer/Scientist for its Safety Systems team, focused on ensuring the safe deployment of AI models. This role combines practical projects with fundamental research in AI safety. The position involves developing innovative machine learning techniques to enhance foundation models' safety understanding and capabilities. You'll work on critical initiatives including moderation policy enforcement, democratic policy development, and safety reward modeling.

The Safety Reasoning Research team operates at the intersection of immediate practical needs and long-term research goals. Key responsibilities include improving AI models' ability to reason about safety, values, and cultural norms, developing moderation systems, and addressing crucial societal challenges like election misinformation. The role requires expertise in Python programming and AI safety concepts, with a focus on RLHF, adversarial training, and fairness.

Working at OpenAI offers competitive compensation ($245K-$440K) plus equity, comprehensive benefits including medical/dental/vision insurance, mental health support, and generous parental leave. The position is based in San Francisco, where you'll join a team dedicated to ensuring AI benefits humanity. This is an opportunity to shape the future of safe AI development while working with cutting-edge technology and contributing to OpenAI's mission of building beneficial AGI.

The ideal candidate will have 5+ years of research engineering experience, strong programming skills, and a deep commitment to AI safety. You'll collaborate with policy researchers, develop automated systems, and help create robust safety measures for AI deployment. OpenAI values diversity and maintains an inclusive work environment, offering reasonable accommodations and equal opportunities to all qualified candidates.

Responsibilities For Research Engineer / Scientist, Safety Reasoning

  • Conduct applied research to improve AI models' reasoning about human values, morals, ethics, and cultural norms
  • Develop and refine AI moderation models
  • Work with policy researchers on content policies
  • Contribute to multimodal content analysis research
  • Develop and improve pipelines for automated data labeling and augmentation
  • Design and implement a red-teaming pipeline for harm-prevention systems

Requirements For Research Engineer / Scientist, Safety Reasoning

  • 5+ years of research engineering experience
  • Proficiency in Python or similar languages
  • Experience with large-scale AI systems and multimodal datasets (preferred)
  • Proficiency in AI safety techniques (RLHF, adversarial training, robustness, fairness and bias)
  • Enthusiasm for AI safety
  • Alignment with OpenAI's mission and charter

Benefits For Research Engineer / Scientist, Safety Reasoning

  • Medical, dental, and vision insurance for you and your family
  • Mental health and wellness support
  • 401(k) plan with 50% matching
  • Generous time off and company holidays
  • 24 weeks of paid birth-parent leave and 20 weeks of paid parental leave
  • Annual learning & development stipend ($1,500 per year)
  • Equity

Jobs Related To OpenAI Research Engineer / Scientist, Safety Reasoning

Research Engineer, Preparedness

Senior Research Engineer position at OpenAI focusing on AI safety and preparedness, working on frontier AI model evaluation and risk management.

Research Engineer / Scientist, Model Fusion

Senior ML Research Engineer position at OpenAI, focusing on model fusion and deployment for ChatGPT and API services, offering competitive compensation and the opportunity to shape the future of AI.

Research Engineer / Research Scientist, Computer-Using Agent

Senior research position at OpenAI focusing on developing computer-using agents, combining cutting-edge AI research with practical engineering implementation.

Research Engineer / Research Scientist, Post-Training, Frontier Product Research

Senior AI research position at OpenAI focusing on training advanced models and developing novel research methods for next-generation AI products.

Research Engineer, Multimodal Safety

Senior Research Engineer role at OpenAI focusing on multimodal safety, developing innovative techniques for AI model safety and compliance.