OpenAI is seeking a Research Engineer/Scientist for its Safety Systems team, which is responsible for ensuring the safe deployment of AI models. The role combines practical projects with fundamental research in AI safety, developing novel machine learning techniques that improve foundation models' understanding of safety and their safety-related capabilities. You'll work on critical initiatives including moderation policy enforcement, democratic policy development, and safety reward modeling.
The Safety Reasoning Research team operates at the intersection of immediate practical needs and long-term research goals. Key responsibilities include improving AI models' ability to reason about safety, values, and cultural norms; developing moderation systems; and addressing pressing societal challenges such as election misinformation. The role requires expertise in Python programming and in AI safety concepts such as RLHF, adversarial training, and fairness.
The role offers competitive compensation ($245K–$440K plus equity) and comprehensive benefits, including medical, dental, and vision insurance, mental health support, and generous parental leave. The position is based in San Francisco, where you'll join a team dedicated to ensuring that AI benefits humanity. This is an opportunity to shape the future of safe AI development while working with cutting-edge technology and contributing to OpenAI's mission of building beneficial AGI.
The ideal candidate has 5+ years of research engineering experience, strong programming skills, and a deep commitment to AI safety. You'll collaborate with policy researchers, build automated safety systems, and help create robust safeguards for AI deployment. OpenAI values diversity and maintains an inclusive work environment, offering reasonable accommodations and equal opportunity to all qualified candidates.