Google's Trust and Safety team is seeking a Data Scientist to help make the internet safer. The role pairs data science expertise with responsible AI work, focusing on protecting users across Google's products, and offers the chance to apply cutting-edge technology to large-scale datasets in the realm of AI safety and trust.
The role requires a strong quantitative background with at least 5 years of experience (or 3 with a PhD) in data analysis, coding, and statistical methods. You'll use Python and SQL to analyze large datasets, develop metrics, and optimize content moderation workflows. The position emphasizes both technical depth and cross-functional collaboration, so excellent communication skills are essential.
As part of the Trust and Safety organization, you'll be involved in critical projects related to responsible AI development, testing standards, and red teaming exercises. The role offers competitive compensation ($150,000-$223,000) plus bonus, equity, and comprehensive benefits, reflecting Google's commitment to attracting top talent.
The position is based in Washington D.C., where you'll work with a diverse team of analysts, policy specialists, and technical experts. You'll have the opportunity to influence product development through data-driven insights and help shape the future of AI safety at one of the world's leading technology companies.
This role is ideal for candidates passionate about pairing technical skill with ethical technology development, offering the chance to make a significant impact on user trust and safety across Google's global products. It provides strong growth opportunities in a supportive, innovative environment, working on meaningful challenges at the intersection of data science and responsible AI.