At Google DeepMind, we're at the forefront of artificial intelligence research and development. We're seeking a Research Engineer to join our team in creating and executing evaluations for cutting-edge AI systems. In this role, you'll apply your engineering skills to develop and maintain infrastructure for reliable, repeatable evaluations, including datasets, automated and human evaluation systems, and analysis tools. You'll stay current with AI development trends, governance, and sociotechnical research to design new evaluations and communicate results to decision-makers. Collaborating closely with other engineers, research scientists, and experts in AI ethics and policy, you'll play a crucial role in ensuring the safety of our AI systems.
Key responsibilities include:
• Designing and developing evaluations to test AI model safety
• Developing and maintaining evaluation infrastructure
• Running evaluations prior to new AI model releases
• Clearly communicating results to decision-makers
• Collaborating with experts in AI ethics, policy, and safety
We're looking for candidates with:
• A Bachelor's degree in a technical subject or equivalent experience
• Strong Python knowledge and experience
• Understanding of mathematics, statistics, and machine learning concepts
• Ability to present technical results clearly
• Deep interest in AI ethics, safety, and policy
Additional advantageous skills include experience with crowd computing, web application development, data analysis tools, and multi-stakeholder projects.
At Google DeepMind, we value diversity and are committed to equal employment opportunity. We welcome applications from all backgrounds and accommodate additional needs. Join us in shaping the future of AI with a focus on widespread public benefit, scientific discovery, and the highest priority on safety and ethics.