OpenAI, a leading AI research and deployment company, is seeking a Research Engineer specializing in AI Security & Privacy to join its Safety Systems team in San Francisco. This role offers a unique opportunity to shape the future of AI security and privacy by working on cutting-edge challenges specific to large language models.
As a Research Engineer, you'll be at the forefront of developing innovative methodologies and implementing systems to reduce risks associated with AI security and privacy during model deployment. You'll tackle emergent challenges such as model inversion prevention, knowledge unlearning, anti-regurgitation, fine-tuning safety, and protection against data poisoning.
The ideal candidate holds a Ph.D. or other advanced degree in computer science, AI, or a related field, and has at least 3 years of experience in AI security and privacy research for deep learning models. Strong programming skills, particularly in Python and frameworks like PyTorch or TensorFlow, are essential. You should be goal-oriented, adaptable, and thrive in a collaborative environment.
OpenAI offers a competitive salary range of $295,000 to $440,000, along with generous equity and benefits. These include comprehensive health insurance, mental health support, a 401(k) plan with 50% matching, unlimited time off, paid parental leave, and an annual learning stipend.
Join OpenAI in its mission to ensure that artificial general intelligence benefits all of humanity. This role provides an exceptional opportunity to work on groundbreaking AI technology while addressing critical security and privacy concerns in the rapidly evolving field of artificial intelligence.