Are you interested in contributing to the development of the next generation of generative AI applications at Apple? Do you possess a passion for languages and international markets? We are actively expanding our generative AI products globally.
In this role, you will have the opportunity to address innovative challenges in machine learning, particularly focusing on generative models. As a member of the Global Safety team at Apple, you will work in a fast-paced environment to assess bias and harm in models, identifying issues and developing mitigation solutions tailored specifically to each of our international markets. Our team is currently interested in large generative models for vision and language, with a specific focus on safety, robustness, and uncertainty in models.
We are responsible for ensuring a seamless and safe end-to-end experience across our generative features in global markets, adhering to Apple's global standards. Our team manages a diverse range of responsibilities tailored to our global markets, encompassing international policymaking, red teaming initiatives, general safety evaluations, multilingual guardrail modeling, model alignment, and experimentation.
You will be responsible for establishing and defining safety best practices, streamlining evaluations, and working on model alignment, with a specific focus on multilingual models and data generation. Your work will be highly cross-functional, involving collaboration with skilled machine learning engineers, software engineers, policy makers, and design teams worldwide to develop and deliver groundbreaking solutions. You'll use your expertise in technology's societal implications to contribute to research projects and improve the safety development ecosystem.