AWS's Annapurna Labs is seeking a hands-on technical leader for its system-on-chip hardware abstraction layer (SoC HAL) software team. This role sits at the intersection of hardware and software, working on critical infrastructure that powers AWS's machine learning servers built on the custom Trainium and Inferentia chips.
The position involves leading a small team of ~5 developers while maintaining strong technical involvement. You'll work closely with hardware designers and system software teams to build HALs for new SoC IPs, solve architectural challenges, and ensure the reliable operation of AWS's custom silicon infrastructure.
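To give a concrete flavor of what building a HAL for a new SoC IP can look like, here is a minimal, hypothetical C++ sketch of an abstraction over a memory-mapped device block. The RegisterIo and DmaEngineHal names, register offsets, and bit fields are illustrative assumptions only, not taken from any AWS or Annapurna Labs code.

```cpp
#include <cstdint>

// Thin register-access shim: keeping raw MMIO behind an interface lets the
// same HAL logic run against real hardware, a simulator, or a unit-test fake.
class RegisterIo {
public:
    explicit RegisterIo(volatile uint32_t* base) : base_(base) {}
    uint32_t Read(uint32_t offset) const { return base_[offset / sizeof(uint32_t)]; }
    void Write(uint32_t offset, uint32_t value) { base_[offset / sizeof(uint32_t)] = value; }
private:
    volatile uint32_t* base_;
};

// HAL for a hypothetical DMA engine IP: callers see Reset()/StartTransfer(),
// never raw register offsets or bit masks.
class DmaEngineHal {
public:
    explicit DmaEngineHal(RegisterIo io) : io_(io) {}

    void Reset() {
        io_.Write(kControlReg, kControlResetBit);
        // Spin until the engine reports idle (a production HAL would add a timeout).
        while (io_.Read(kStatusReg) & kStatusBusyBit) {
        }
    }

    void StartTransfer(uint64_t src, uint64_t dst, uint32_t bytes) {
        io_.Write(kSrcAddrLoReg, static_cast<uint32_t>(src));
        io_.Write(kSrcAddrHiReg, static_cast<uint32_t>(src >> 32));
        io_.Write(kDstAddrLoReg, static_cast<uint32_t>(dst));
        io_.Write(kDstAddrHiReg, static_cast<uint32_t>(dst >> 32));
        io_.Write(kLengthReg, bytes);
        io_.Write(kControlReg, kControlStartBit);
    }

    bool TransferDone() const { return (io_.Read(kStatusReg) & kStatusBusyBit) == 0; }

private:
    // Illustrative register map for the hypothetical IP block.
    static constexpr uint32_t kControlReg   = 0x00;
    static constexpr uint32_t kStatusReg    = 0x04;
    static constexpr uint32_t kSrcAddrLoReg = 0x08;
    static constexpr uint32_t kSrcAddrHiReg = 0x0C;
    static constexpr uint32_t kDstAddrLoReg = 0x10;
    static constexpr uint32_t kDstAddrHiReg = 0x14;
    static constexpr uint32_t kLengthReg    = 0x18;
    static constexpr uint32_t kControlResetBit = 1u << 0;
    static constexpr uint32_t kControlStartBit = 1u << 1;
    static constexpr uint32_t kStatusBusyBit   = 1u << 0;

    RegisterIo io_;
};
```

The separation between the register shim and the device-level API is a common design choice for this kind of team: the HAL can be exercised in simulation before silicon arrives, then pointed at real MMIO once the hardware is up.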
Candidates should be proficient in C++ and Python, with a strong background in low-level software development and hardware systems. Although the team supports ML infrastructure, no machine learning expertise is required; the focus is on hardware abstraction and system management.
This role offers the unique opportunity to work at scale with custom silicon that powers AWS's machine learning capabilities. You'll be part of AWS's Utility Computing organization, which drives innovation across AWS's core services. The team culture emphasizes mentorship, knowledge-sharing, and continuous learning.
Compensation is competitive ($151,300 to $261,500 base salary) plus equity and benefits, and the role can be based in either Cupertino, CA or Austin, TX. You'll work in a fast-paced environment alongside thought leaders in multiple technology areas, with the chance to make a significant impact on AWS's machine learning infrastructure.
If you're passionate about building effective abstractions over complex hardware systems, enjoy both technical leadership and people management, and want to work on cutting-edge custom silicon at cloud scale, this role offers an exciting opportunity to shape the future of AWS's machine learning infrastructure.