Google DeepMind is at the forefront of artificial intelligence research, working to advance AI for widespread public benefit and scientific discovery. As a Research Engineer focusing on AI Safety, you'll join a team of scientists, engineers, and ML experts working to make Gemini pre-trained models safer and more capable. The role combines research innovation with practical implementation, spanning model safety, fairness, and effectiveness.
You'll work directly with Gemini models, conducting research during the pre-training phase to enhance safety while preserving model capabilities. This position offers a unique opportunity to shape the development of safe and socially beneficial AI systems, working alongside leading experts in the field.
The ideal candidate brings both technical expertise in machine learning and a commitment to ethical AI development. You'll need strong programming skills, experience with deep learning frameworks, and the ability to tackle complex technical challenges. The role offers the chance to contribute to cutting-edge AI safety research while working on one of the most advanced AI systems in the world.
This position is perfect for someone who combines technical excellence with a passion for ensuring AI systems are developed responsibly and ethically. You'll be part of a diverse team committed to creating extraordinary impact, with opportunities to publish research and collaborate on critical challenges in AI safety.