The Safety Reasoning Research team at OpenAI is seeking a Research Engineer to develop innovative machine learning techniques that enhance the safety understanding and capabilities of foundation models. Key responsibilities include defining and developing impactful safety tasks, improving moderation models, and contributing to policy development.
The ideal candidate will have 5+ years of research engineering experience, proficiency in Python, and a strong background in AI safety. Experience with large-scale AI systems and multimodal datasets is a plus. This role offers an opportunity to work at the forefront of AI safety, contributing to OpenAI's mission of building safe, universally beneficial AGI.
OpenAI provides a diverse and inclusive work environment, offering equal employment opportunities and reasonable accommodations for applicants with disabilities. The company is committed to pushing the boundaries of AI capabilities while prioritizing safety and human needs.
Join OpenAI in shaping the future of technology and ensuring that the benefits of AI are widely shared.