Tesla is seeking a Software Compiler Engineer Intern to join its AI Inference team in Palo Alto. This exciting role focuses on developing and optimizing the AI inference stack and compiler that power neural networks in Tesla vehicles and Optimus. As an intern, you'll work at the intersection of AI, hardware, and software, collaborating with AI engineers and hardware engineers to maximize the performance of Tesla's custom hardware.
The position offers a unique opportunity to work on cutting-edge technology with real-world impact: your work will directly influence the performance of AI models deployed across millions of Tesla vehicles. You'll work with a state-of-the-art MLIR compiler and runtime architecture, with unprecedented access to hardware features that enable novel compilation approaches for enhanced model performance.
The role requires strong technical skills in C++ and Python, combined with a solid understanding of machine learning concepts. You'll be responsible for developing optimization algorithms, debugging complex parallel systems, and collaborating across teams to improve neural network performance. This internship is ideal for students passionate about AI, compiler optimization, and high-performance computing who want to make a tangible impact on the future of autonomous vehicles and robotics.
Tesla offers comprehensive benefits, including medical, dental, and vision coverage with $0 payroll deduction options, a 401(k), stock purchase plans, and various other perks. Compensation is highly competitive, ranging from $100,000 to $150,000 annually plus benefits. This is a full-time, on-site position starting around January 2025, requiring a minimum 12-week commitment, with potential extension through Summer 2025.