Google DeepMind is seeking Research Engineers to join its Gemini Safety and AGI Safety & Alignment (ASAT) teams, which focus on building safe and reliable AI systems. The role offers an opportunity to work at the forefront of AI safety research, implementing safety measures for some of the most advanced AI systems in the world.
The position spans multiple focus areas, including pretraining safety interventions, post-training safety improvements, red teaming and adversarial resilience, and image/video generation safety. Work includes implementing the Frontier Safety Framework, developing interpretability methods, and conducting safety research for AGI systems.
As a Research Engineer, you'll be responsible for designing and implementing approaches to AI alignment, conducting empirical studies on model behavior, and integrating safety measures into production systems. The role requires both technical expertise in machine learning and a strong commitment to AI safety.
The ideal candidate brings at least one year of deep learning experience, strong mathematical and statistical knowledge, and proficiency with major ML frameworks. You'll join a diverse team of experts in a collaborative environment, with access to cutting-edge resources and technology.
Benefits include comprehensive healthcare, flexible working arrangements, and strong support for work-life balance, along with competitive compensation and relocation support for eligible candidates. This is an opportunity to directly shape the development of safe and beneficial AI systems while working alongside leading minds in the field.