Google's Trust & Safety team is seeking a Data Scientist to join its mission of making the internet safer. This role applies data science expertise to security and trust initiatives, focusing on protecting users across Google's diverse product ecosystem. You'll be part of a team that fights abuse and fraud, working with cutting-edge AI and ML technologies to develop scalable safety solutions.
The position offers an opportunity to work on high-impact projects that directly affect user safety and trust in Google's products. You'll analyze complex datasets, develop automated solutions, and collaborate with cross-functional teams including engineers and product managers. The role requires both technical expertise in data science and a strong understanding of trust and safety principles.
As an AI Safety Protections Data Scientist, you'll be responsible for identifying trends, generating insights, and developing sophisticated protection measures. You'll work in a dynamic environment where your analyses and recommendations directly shape Google's safety protocols and user protection strategies.
The ideal candidate combines strong analytical skills with project management experience, bringing both technical depth and strategic thinking to the role. You'll contribute to Google's broader mission of maintaining user trust while working on challenging problems at a global scale.