Google's Trust & Safety team is seeking a Lead Senior Engineering Analyst for its AI Safety Protections team. The role combines data analysis, machine learning, and security to keep Google's AI products safe, and involves working with cutting-edge AI technologies, including Gemini and other foundation models, to develop and implement safety solutions across Google's product ecosystem.
The role requires expertise in data analysis, project management, and machine learning, with a focus on identifying and preventing abuse and fraud. You'll work globally with Google engineers and product managers to protect users and partners across products such as Search, Maps, Gmail, and Google Ads, gaining exposure to the latest advances in AI/LLM technology and the opportunity to improve user safety at scale.
As part of the AI Safety Protections team, you'll be responsible for developing scalable safety solutions, analyzing complex data sets, and implementing AI-powered protective measures. The role calls for strong analytical skills, proficiency in languages such as Python, SQL, or C++, and the ability to work with sensitive content while maintaining user trust.
The position offers competitive compensation, including a base salary range of $139,000-$207,000, plus bonus, equity, and comprehensive benefits. This is an excellent opportunity for someone passionate about AI safety, data analysis, and user protection to join one of the world's leading technology companies and make a significant impact on the safety and integrity of AI systems.
Working on Google's Trust & Safety team means being part of a diverse group of experts dedicated to making the internet safer. You'll collaborate with teams across Google, including Google DeepMind, and have the opportunity to work on some of the most challenging problems in AI safety and user protection.