Research Engineer - AI Safety

A leading AI research company focused on developing safe and socially beneficial artificial intelligence systems.
$114,000 - $245,000
Machine Learning
Mid-Level Software Engineer
In-Person
1,000 - 5,000 Employees
1+ year of experience
AI

Description For Research Engineer - AI Safety

Google DeepMind is seeking Research Engineers to join its Gemini Safety and AGI Safety & Alignment (ASAT) teams, focusing on building safe and reliable AI systems. This role offers a unique opportunity to work at the forefront of AI safety research, contributing to critical areas such as model alignment, interpretability, and risk mitigation.

The position spans multiple focus areas including pretraining safety interventions, post-training safety improvements, red teaming and adversarial resilience, and image/video generation safety. You'll be working with state-of-the-art AI systems like Gemini, contributing to the Frontier Safety Framework, and implementing crucial safety evaluations and mitigations.

As a Research Engineer, you'll collaborate with world-class researchers and engineers to design and implement approaches that ensure AI systems work as intended. The role requires both technical expertise in machine learning and a strong commitment to AI safety. You'll be working on projects that directly impact the development of safe and socially beneficial AI systems.

The position offers competitive compensation, comprehensive benefits, and the opportunity to work in major tech hubs. You'll be supported with excellent facilities, flexible working options, and a strong emphasis on work-life balance. This is an ideal role for someone passionate about ensuring the safe development of advanced AI systems while working at one of the world's leading AI research organizations.

The role requires at least one year of experience with deep learning or foundation models, strong programming skills, and the ability to understand and implement complex research papers. You'll be working in a fast-paced environment where your work will have immediate impact on production systems while also contributing to longer-term safety goals.


Responsibilities For Research Engineer - AI Safety

  • Design, implement, and empirically validate approaches to alignment and risk mitigation
  • Integrate successful approaches into AI systems
  • Conduct empirical studies on model behavior
  • Analyze model performance across different scales
  • Implement safety policies and improve development velocity
  • Design and run evaluations for safety and fairness
  • Work on model interpretability and safety implementations
  • Contribute to the Frontier Safety Framework implementation

Requirements For Research Engineer - AI Safety

Python
  • At least 1 year of experience working with deep learning and/or foundation models
  • Knowledge of mathematics, statistics, and machine learning concepts
  • Familiarity with ML/scientific libraries (JAX, TensorFlow, PyTorch, NumPy, pandas)
  • Experience with distributed computation and large scale system design
  • Understanding of machine learning workflows
  • Ability to understand research papers in the field

Benefits For Research Engineer - AI Safety

Medical Insurance
Dental Insurance
Parental Leave
Relocation Benefits
Visa Sponsorship
  • Enhanced maternity, paternity, adoption, and shared parental leave
  • Private medical and dental insurance for employee and dependents
  • Flexible working options
  • On-site gym
  • Healthy food options
  • Faith rooms
  • Relocation support
  • Immigration support


Jobs Related To Google DeepMind Research Engineer - AI Safety

Research Engineer

Research Engineer position at Google DeepMind working on applying ML models to improve Alphabet products, focusing on Gemini experiences.

Research Engineer - AI Safety

Research Engineer position at Google DeepMind focusing on AI safety and improvement of Gemini pre-trained models.

Software Engineer - Trustworthy ML

Software Engineer position at Google DeepMind focusing on trustworthy machine learning, working on strategic projects to enable robust and reliable AI systems.

Research Engineer

Research Engineer position at Google DeepMind focusing on applying machine learning techniques to scientific problems in materials physics and quantum chemistry.

Research Engineer - Sociotechnical Analysis of Model Behaviour (SAMBA)

Research Engineer position at Google DeepMind focusing on sociotechnical analysis of AI model behavior and responsible AI development.