Google's Trust & Safety team is seeking an AI Safety Data Scientist to join its mission of protecting users across Google's products. The role combines data science expertise with AI safety work, focusing on developing and implementing safety measures for Google's AI products. You'll work with cutting-edge AI technology to build scalable solutions that ensure content safety and policy compliance.
The position calls for strong analytical skills and hands-on data analysis experience, with opportunities to work on critical safety features that affect billions of users. You'll join the AI Safety Protections team, which is responsible for safeguarding generative AI experiences across Google products. Day-to-day work includes developing safety classifiers, evaluating protection measures, and building automated data pipelines.
As an AI Safety Data Scientist, you'll collaborate with engineers and product managers around the world to combat abuse and fraud. The role offers exposure to advanced machine learning techniques and the chance to tackle real-world safety challenges. You'll craft data stories for stakeholders and develop insights that inform business decisions.
The ideal candidate has a bachelor's degree in a quantitative field, experience with Python and SQL, and a background in data analysis. Strong problem-solving skills and the adaptability to work in a rapidly evolving environment are essential. This position offers the opportunity to make a significant impact on user safety while working with state-of-the-art AI technology at one of the world's leading tech companies.