Google DeepMind is seeking Research Engineers to join its Gemini Safety and AGI Safety & Alignment (ASAT) teams, both focused on building safe and reliable AI systems. The role offers an opportunity to work at the forefront of AI safety research, contributing to critical areas such as model alignment, interpretability, and risk mitigation.
The position spans multiple focus areas, including pretraining safety interventions, post-training safety improvements, red teaming and adversarial resilience, and image and video generation safety. You'll work with state-of-the-art AI systems such as Gemini, contribute to the Frontier Safety Framework, and implement critical safety evaluations and mitigations.
As a Research Engineer, you'll collaborate with world-class researchers and engineers to design and implement approaches that ensure AI systems behave as intended. The role demands both technical expertise in machine learning and a strong commitment to AI safety, and your projects will directly shape the development of safe and socially beneficial AI systems.
The position offers competitive compensation, comprehensive benefits, and the opportunity to work in major tech hubs, supported by excellent facilities, flexible working options, and a strong emphasis on work-life balance. It is an ideal role for someone passionate about ensuring the safe development of advanced AI systems while working at one of the world's leading AI research organizations.
The role requires at least one year of experience with deep learning or foundation models, strong programming skills, and the ability to understand and implement complex research papers. You'll work in a fast-paced environment where your contributions have an immediate impact on production systems while also advancing longer-term safety goals.