Google's Trust & Safety team is seeking an AI Safety Data Scientist to join its mission of protecting users across Google's products. This role combines data science expertise with AI safety work, focusing on developing and implementing safety measures for Google's AI products. You'll be part of the AI Safety Protections team, working to safeguard Generative AI experiences through advanced machine learning techniques and data analysis.
The position requires strong analytical skills and experience in data science, with opportunities to work on critical safety solutions that impact millions of users. You'll collaborate with engineers and product managers globally to identify and combat abuse and fraud, while ensuring the highest levels of user safety in AI applications.
Key responsibilities include developing scalable safety solutions, analyzing protection measures, and creating automated data pipelines. You'll work with sensitive content and must be comfortable handling complex, sometimes challenging topics. The role offers exposure to cutting-edge AI technology while contributing to Google's mission of maintaining user trust and safety.
The ideal candidate has experience in data analysis, project management, and machine learning, with a background in abuse and fraud prevention. You'll be working in a dynamic environment that requires excellent problem-solving skills and the ability to communicate complex data insights to a range of stakeholders.
This position offers the opportunity to work at one of the world's leading technology companies, with access to state-of-the-art resources and the chance to make a significant impact on AI safety. You'll be part of a diverse team of experts working across multiple products and languages, helping to shape the future of safe AI technology.