
AI Safety Data Scientist, Trust and Safety

Google is a global technology company that develops AI-powered products and services.
Data · Entry-Level · In-Person · 5,000+ Employees · 1+ year of experience · AI

Description For AI Safety Data Scientist, Trust and Safety

Google's Trust & Safety team is seeking an AI Safety Data Scientist to join its mission of protecting users across Google's products. This role combines data science expertise with AI safety, focusing on developing and implementing safety measures for Google's AI products. You'll be part of the AI Safety Protections team, working to safeguard Generative AI experiences through advanced machine learning techniques and data analysis.

The position requires strong analytical skills and experience in data science, with opportunities to work on critical safety solutions that impact millions of users. You'll collaborate with engineers and product managers globally to identify and combat abuse and fraud, while ensuring the highest levels of user safety in AI applications.

Key responsibilities include developing scalable safety solutions, analyzing protection measures, and creating automated data pipelines. You'll work with sensitive content and will need to handle complex, sometimes challenging topics. The role offers exposure to cutting-edge AI technology while contributing to Google's mission of maintaining user trust and safety.

The ideal candidate should have experience in data analysis, project management, and machine learning, with a background in abuse and fraud disciplines. You'll be working in a dynamic environment that requires excellent problem-solving skills and the ability to communicate complex data insights to various stakeholders.

This position offers the opportunity to work at one of the world's leading technology companies, with access to state-of-the-art resources and the chance to make a significant impact on AI safety. You'll be part of a diverse team of experts working across multiple products and languages, helping to shape the future of safe AI technology.


Responsibilities For AI Safety Data Scientist, Trust and Safety

  • Develop scalable safety solutions for AI products across Google by leveraging advanced machine learning and AI techniques
  • Apply statistical and data science methods to examine Google's protection measures, uncover potential shortcomings, and develop insights for continuous security enhancement
  • Drive business outcomes by crafting compelling data stories for a variety of stakeholders, including senior leadership
  • Develop automated data pipelines and self-service dashboards to provide timely insights at scale
  • Work with sensitive content/situations and may be exposed to graphic, controversial, or upsetting topics/content

Requirements For AI Safety Data Scientist, Trust and Safety

  • Bachelor's degree or equivalent practical experience
  • 1 year of experience in data analysis, including identifying trends, generating summary statistics, and drawing insights from quantitative and qualitative data
  • 1 year of experience managing projects and defining project scope, goals, and deliverables
  • Experience with one or more of the following languages: SQL, R, Python, or C++
  • Experience in abuse and fraud disciplines, focused on web security, harmful content moderation, and threat analysis
  • Experience applying machine learning techniques to datasets
  • Excellent problem-solving and critical thinking skills with attention to detail
