Google's Trust & Safety team is seeking a Data Scientist to focus on AI Safety Protections. The role applies data science expertise to critical trust and safety initiatives that protect users across Google's diverse product ecosystem. As part of the team, you'll leverage advanced machine learning and AI techniques to develop scalable safety solutions, analyze protection measures, and drive improvements in user safety.
The position requires strong analytical capabilities, with a focus on identifying trends and generating insights from both quantitative and qualitative data. You'll be responsible for developing automated data pipelines, creating self-service dashboards, and communicating complex findings to various stakeholders, including executive leadership.
The ideal candidate brings at least 2 years of experience in data analysis and project management, along with a bachelor's degree or equivalent practical experience. Preferred qualifications include an advanced degree in a quantitative discipline and experience in abuse- and fraud-fighting disciplines, particularly web security and content moderation.
Working at Google's Trust & Safety team means being at the forefront of protecting billions of users worldwide. You'll collaborate with engineers and product managers globally to combat abuse and fraud across Google's products like Search, Maps, Gmail, and Google Ads. The role offers the opportunity to make a significant impact on user safety while working with cutting-edge AI and machine learning technologies.
The position involves working with sensitive content and requires strong problem-solving skills in a dynamic environment. You'll join a diverse team of analysts, policy specialists, engineers, and program managers working across 40+ languages to reduce risk and fight abuse. This is an excellent opportunity for someone passionate about data science who wants to help make the internet a safer place while working at one of the world's leading technology companies.