Google's Trust & Safety team is seeking an AI Safety Data Scientist to join its mission of protecting users across all Google products. This role pairs data science expertise with AI safety work, focusing on developing and implementing safety measures for Google's AI products. You'll be part of the AI Safety Protections team, working specifically on safeguarding Generative AI experiences.
The position requires strong analytical skills and experience in data analysis, with a focus on identifying trends and generating insights from both quantitative and qualitative data. You'll work with cutting-edge AI technology while developing scalable safety solutions and automated data pipelines. The role involves collaborating with engineers and product managers globally to fight abuse and fraud at Google's scale.
As an AI Safety Data Scientist, you'll be at the forefront of ensuring the responsible deployment of AI technology, working with sensitive content and making critical decisions that affect user safety. The role offers the opportunity to apply advanced machine learning techniques and contribute to the development of safety classifiers for both server-side and on-device implementations.
Google offers a collaborative environment where you'll work with diverse teams across multiple products like Search, Maps, Gmail, and Google Ads. The position is ideal for someone who combines technical expertise with a passion for user safety and trust. You'll be part of a team that's dedicated to making the internet safer while working with the latest AI technologies.
The role requires a bachelor's degree or equivalent practical experience, with a preferred background in quantitative disciplines. You'll need experience with languages such as SQL, Python, or R, and familiarity with machine learning techniques. This position offers the opportunity to work on meaningful problems that directly impact user safety while being part of a company that values diversity, equality, and inclusion.