Google's Trust and Safety team is seeking a Data Scientist to join its efforts to make the internet safer. This role combines data science expertise with responsible AI development, focusing on protecting users across Google's products. The position offers the opportunity to work with cutting-edge technology and large-scale datasets in AI safety and trust.
The role requires a strong quantitative background with at least 5 years of experience (or 3 with a PhD) in data analysis, coding, and statistical methods. You'll use Python and SQL to analyze large datasets, develop metrics, and optimize content moderation workflows. The position emphasizes both technical depth and cross-functional collaboration, so excellent communication skills are essential.
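To make that day-to-day work concrete, here is a minimal, purely illustrative sketch of the kind of Python-plus-SQL metric development the posting describes. The table, column names, and metrics below are hypothetical assumptions, not drawn from the listing or from any Google system.

```python
import sqlite3

import pandas as pd

# Hypothetical moderation-review data; the schema and values are illustrative only.
conn = sqlite3.connect(":memory:")
pd.DataFrame(
    {
        "item_id": [1, 2, 3, 4, 5, 6],
        "model_flagged": [1, 1, 0, 1, 0, 1],  # classifier flagged the item
        "human_verdict": [1, 0, 0, 1, 0, 1],  # human reviewer confirmed a violation
    }
).to_sql("moderation_reviews", conn, index=False)

# SQL pulls per-item outcomes; Python turns them into workflow metrics.
df = pd.read_sql_query(
    "SELECT model_flagged, human_verdict FROM moderation_reviews", conn
)

flagged = df[df["model_flagged"] == 1]
precision = (flagged["human_verdict"] == 1).mean()  # share of flags confirmed by humans
recall = flagged["human_verdict"].sum() / df["human_verdict"].sum()  # violations caught

print(f"flag precision: {precision:.2f}, recall: {recall:.2f}")
```

Metrics like these might feed decisions about where human review effort is best spent, which is one plausible reading of "optimizing content moderation workflows."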
As part of the Trust and Safety organization, you'll work on critical projects involving responsible AI development, testing standards, and red-teaming exercises. The role offers competitive compensation ($150,000-$223,000) plus bonus, equity, and benefits.
The ideal candidate combines technical expertise in data science and machine learning with a passion for improving user safety and trust in technology. Based in Washington, D.C., you'll collaborate with diverse teams across Google to develop and implement data-driven solutions to complex challenges in AI safety and content moderation.
This is a chance to influence Google's products at scale while working on some of the most pressing challenges in responsible AI. You'll join a team that directly shapes how Google's products protect and serve users worldwide, an ideal fit for someone committed to both technical excellence and ethical technology development.