Help us bring state-of-the-art ML models to the entire Apple ecosystem, enabling the next generation of ML-based experiences in a privacy-preserving way! Our team is responsible for the core framework that launches neural-network workloads on Apple devices. We build the bridge between the compute resources available on Apple hardware and an entire universe of ML models, trained by feature teams throughout Apple and by our developer community.
Your work on our team will enable increasingly sophisticated models throughout our products, from the computer vision models that process every camera frame in the Apple Vision Pro, to the language models that allow human-computer interaction to feel more human. By developing the underlying representation, pipeline, and runtime executor for these workloads, including the mechanisms for mapping them to the CPU, GPU, and Neural Engine, you will play a critical role in expanding what is possible for Apple and for the world.
This role offers a unique opportunity to work on cutting-edge machine learning technology at Apple, contributing to the on-device ML capabilities that power products and services across the Apple ecosystem.