AI Safety Protections Data Scientist, Trust and Safety

Google is a global technology company that builds innovative products and services used by billions of people worldwide.
Data
Mid-Level Software Engineer
In-Person
5,000+ Employees
2+ years of experience
AI · Cybersecurity
This job posting may no longer be active. You may be interested in these related jobs instead:
Research Data Scientist, Cloud Security

Research Data Scientist position at Google Cloud Security, applying ML and analytics to solve cybersecurity challenges, offering $141K-$202K plus benefits in SF/Sunnyvale.

Business Data Scientist, gTech Ads

Business Data Scientist position at Google's gTech Ads team, focusing on marketing analytics and ML solutions for large clients.

Data Engineer, YouTube Business Technology

Data Engineer position at Google's YouTube division, focusing on building data pipelines and analytics solutions for internal business teams.

Customer Engineer, Data Analytics, Google Cloud (English, Korean)

Customer Engineer position at Google Cloud focusing on data analytics solutions, requiring expertise in cloud architecture and big data technologies, with fluency in English and Korean.

Description For AI Safety Protections Data Scientist, Trust and Safety

Google's Trust & Safety team is seeking a Data Scientist to focus on AI Safety Protections. This role combines data science expertise with critical trust and safety initiatives to protect users across Google's diverse product ecosystem. As part of the team, you'll leverage advanced machine learning and AI techniques to develop scalable safety solutions, analyze protection measures, and drive improvements in user safety.

The position requires strong analytical capabilities, with a focus on identifying trends and generating insights from both quantitative and qualitative data. You'll be responsible for developing automated data pipelines, creating self-service dashboards, and communicating complex findings to various stakeholders, including executive leadership.
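
Purely as an illustration of the automated-pipeline and dashboard work described above (not part of the posting itself), a minimal Python sketch might aggregate daily safety metrics into a dashboard-ready table; the file names and column names here are hypothetical.

```python
import pandas as pd

# Hypothetical input: one row per automated safety decision.
# The columns "date", "product", "flagged", and "confirmed_abuse" are assumed for illustration.
decisions = pd.read_csv("safety_decisions.csv", parse_dates=["date"])

# Aggregate daily counts per product into a tidy, dashboard-friendly table.
daily = (
    decisions
    .groupby([pd.Grouper(key="date", freq="D"), "product"])
    .agg(flagged=("flagged", "sum"), confirmed=("confirmed_abuse", "sum"))
    .reset_index()
)
daily["precision"] = daily["confirmed"] / daily["flagged"].clip(lower=1)

# A scheduler (e.g., a daily cron job) could refresh this file for a self-service dashboard.
daily.to_csv("daily_safety_metrics.csv", index=False)
```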

The ideal candidate brings at least 2 years of experience in data analysis and project management, with a bachelor's degree or equivalent practical experience. Preferred qualifications include an advanced degree in a quantitative discipline and experience in abuse and fraud disciplines, particularly in web security and content moderation.

Working at Google's Trust & Safety team means being at the forefront of protecting billions of users worldwide. You'll collaborate with engineers and product managers globally to combat abuse and fraud across Google's products like Search, Maps, Gmail, and Google Ads. The role offers the opportunity to make a significant impact on user safety while working with cutting-edge AI and machine learning technologies.

The position involves working with sensitive content and requires strong problem-solving skills in a dynamic environment. You'll be part of a diverse team of analysts, policy specialists, engineers, and program managers, working across 40+ languages to reduce risk and fight abuse. This is an excellent opportunity for someone passionate about data science who wants to contribute to making the internet a safer place while working at one of the world's leading technology companies.

Last updated 3 months ago

Responsibilities For AI Safety Protections Data Scientist, Trust and Safety

  • Develop scalable safety solutions for Artificial Intelligence (AI) products across Google by leveraging advanced machine learning and AI techniques
  • Apply statistical and data science methods to thoroughly examine Google's protection measures, uncover potential shortcomings, and develop insights that guide subsequent security enhancements (see the illustrative sketch after this list)
  • Deliver business outcomes by crafting data stories for a variety of stakeholders, including executive leadership
  • Develop automated data pipelines and self-service dashboards to provide insights at scale
  • Work with sensitive content or situations and be exposed to graphic, controversial or upsetting topics or content
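
As a hedged illustration of the second responsibility above, examining a protection measure statistically might look something like the sketch below, which flags days where a metric falls well below its recent baseline. The thresholds, column names, and input file are assumptions for this example, not Google's actual method.

```python
import pandas as pd

# Hypothetical daily metric table; "date", "product", and "block_rate" are illustrative names.
metrics = pd.read_csv("daily_safety_metrics.csv", parse_dates=["date"])

def flag_regressions(df: pd.DataFrame, window: int = 28, z_threshold: float = 3.0) -> pd.DataFrame:
    """Flag days where a protection metric drops well below its rolling baseline."""
    df = df.sort_values("date").copy()
    rolling = df["block_rate"].rolling(window, min_periods=window)
    df["baseline_mean"] = rolling.mean().shift(1)  # baseline excludes the current day
    df["baseline_std"] = rolling.std().shift(1)
    df["z_score"] = (df["block_rate"] - df["baseline_mean"]) / df["baseline_std"]
    return df[df["z_score"] < -z_threshold]

# One pass per product; flagged days would feed deeper investigation or a data story.
suspect_days = metrics.groupby("product", group_keys=False).apply(flag_regressions)
print(suspect_days[["date", "product", "block_rate", "z_score"]])
```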

Requirements For AI Safety Protections Data Scientist, Trust and Safety

Python · Java
  • Bachelor's degree or equivalent practical experience
  • 2 years of experience in data analysis, including identifying trends, generating summary statistics, and drawing insights from quantitative and qualitative data
  • 2 years of experience managing projects and defining project scope, goals, and deliverables
  • Experience in programming languages (Python, R, Julia), database languages (SQL), and general-purpose languages (C/C++, Java)
  • Experience in applying Machine Learning techniques to datasets (see the sketch after this list)
  • Excellent problem-solving and critical thinking skills with attention to detail
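
To make the Machine Learning requirement above concrete, here is a minimal, hedged example of applying a standard classifier to a labeled dataset; the data is synthetic and the modeling choices are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a labeled abuse dataset (heavily imbalanced, as abuse data usually is).
X, y = make_classification(n_samples=5_000, n_features=20, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# A simple baseline classifier; class_weight="balanced" compensates for the rare positive class.
model = LogisticRegression(max_iter=1_000, class_weight="balanced")
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```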
