Research Engineer - AI Safety

AI research company working on advancing artificial intelligence for widespread public benefit and scientific discovery
Machine Learning
Mid-Level Software Engineer
1+ year of experience
AI

Description For Research Engineer - AI Safety

Google DeepMind is at the forefront of artificial intelligence research, working to advance AI for widespread public benefit and scientific discovery. As a Research Engineer focusing on AI Safety, you'll join a team of scientists, engineers, and ML experts working on making Gemini pre-trained models safer and more powerful. The role combines research innovation with practical implementation, focusing on improving model safety, fairness, and effectiveness.

You'll be working directly with Gemini models, conducting crucial research in pre-training phases to enhance safety features while maintaining model capabilities. This position offers a unique opportunity to impact the development of safe and socially beneficial AI systems, working alongside leading experts in the field.

The ideal candidate brings both technical expertise in machine learning and a commitment to ethical AI development. You'll need strong programming skills, experience with deep learning frameworks, and the ability to work on complex technical challenges. The role offers the chance to contribute to cutting-edge AI safety research while working on one of the most advanced AI systems in the world.

This position is perfect for someone who combines technical excellence with a passion for ensuring AI systems are developed responsibly and ethically. You'll be part of a diverse team committed to creating extraordinary impact, with opportunities to publish research and collaborate on critical challenges in AI safety.


Responsibilities For Research Engineer - AI Safety

  • Conduct research and experimentation in Gemini pre-training for safer models
  • Design and maintain evaluation protocols for model behavior assessment
  • Explore data, reasoning, and algorithmic solutions for safe and helpful Gemini models
  • Write high-quality code and infrastructure for fast experimentation
  • Drive innovation and enhance understanding of safety in pre-training at scale

Requirements For Research Engineer - AI Safety

Python
  • Master's-level experience in machine learning, or equivalent practical ML experience
  • At least one year of experience with deep learning and/or foundation models
  • Proficiency in ML/scientific libraries and distributed computation
  • Experience in building codebases for machine learning at scale
  • Interest in addressing safety in foundation models


Jobs Related To Google DeepMind Research Engineer - AI Safety

Research Engineer

Research Engineer position at Google DeepMind working on applying ML models to improve Alphabet products, focusing on Gemini experiences.

Software Engineer - Trustworthy ML

Software Engineer position at Google DeepMind focusing on trustworthy machine learning, working on strategic projects to enable robust and reliable AI systems.

Research Engineer

Research Engineer position at Google DeepMind focusing on applying machine learning techniques to scientific problems in materials physics and quantum chemistry.

Research Engineer - Sociotechnical Analysis of Model Behaviour (SAMBA)

Research Engineer position at Google DeepMind focusing on sociotechnical analysis of AI model behavior and responsible AI development.

Research Engineer

Research Engineer position at Google DeepMind working on cutting-edge AI applications across Google products, offering competitive salary and benefits.