Tesla is seeking a Staff Software Engineer to join its Autonomy team, focusing on ML inference compiler and deployment work for Optimus, its humanoid robot. The role sits at the intersection of AI, robotics, and autonomous systems, contributing to one of the world's most advanced and widely deployed AI platforms.
The position involves deep technical work on Tesla's AI inference stack, which powers both its autonomous vehicles and the Optimus robot. You'll work closely with AI and hardware engineers to optimize neural network performance and efficiency. The role requires expertise in C++ and Python, along with significant experience in machine learning compilers and frameworks.
This is a unique opportunity to shape the future of AI deployment at scale, working on an MLIR-based compiler and runtime architecture. Because Tesla controls the full hardware stack, the team can pursue novel compilation approaches to improve model performance. The work is tied directly to production outcomes, with immediate impact on system performance and model deployment capabilities.
The position offers a competitive compensation package ranging from $120,000 to $360,000 annually, plus additional cash and stock awards. Tesla provides comprehensive benefits including medical, dental, and vision coverage, 401(k) matching, and various family-support programs. The role is based in the San Francisco Bay Area, putting you at the heart of Tesla's innovation center.
This role is ideal for someone who wants to work at the leading edge of AI deployment, enjoys solving complex technical challenges, and is eager to contribute to advances in autonomous systems and robotics. You'll join a team pushing the boundaries of AI hardware optimization and deployment at scale.