Google's Trust & Safety team is seeking an AI Safety Data Scientist to join its mission of protecting users across all Google products. This role combines data science expertise with AI safety, focusing on developing and implementing safety measures for Google's AI products. You'll be part of the AI Safety Protections team, working specifically on safeguarding Generative AI experiences.
The position requires strong analytical and data analysis skills, with a focus on identifying trends and generating insights from both quantitative and qualitative data. You'll work with cutting-edge AI technology while developing scalable safety solutions and automated data pipelines, collaborating with engineers and product managers globally to fight abuse and fraud at Google's scale.
As an AI Safety Data Scientist, you'll be at the forefront of ensuring the responsible deployment of AI technology, working with sensitive content and making critical decisions that impact user safety. The role offers the opportunity to work with advanced machine learning techniques and contribute to the development of safety classifiers for both server-side and on-device implementations.
Google offers a collaborative environment where you'll work with diverse teams across products such as Search, Maps, Gmail, and Google Ads. The position is ideal for candidates who are passionate about data science, AI safety, and user protection, and who can translate complex technical concepts into actionable insights for stakeholders, including senior leadership.
This role represents a unique opportunity to combine technical expertise with real-world impact, helping to shape the future of AI safety while working at one of the world's leading technology companies. You'll be part of a team that's dedicated to maintaining user trust and ensuring the highest standards of safety across Google's AI products.