Google is seeking a Silicon Hardware Architecture Modeling Engineer for its TPU (Tensor Processing Unit) team within Google Cloud. The role is central to shaping Google's AI/ML hardware acceleration roadmap: you'll help drive the TPU technology that powers Google's most demanding AI/ML applications. As part of a diverse team developing custom silicon, you'll work closely with hardware and software architects to model, analyze, and define next-generation Tensor Processing Units.
The Technical Infrastructure team at Google builds and maintains the infrastructure that keeps Google's services running: it develops and operates data centers and builds the next-generation platforms that make Google's entire product portfolio possible. The team takes pride in being the engineers' engineers, keeping networks up and running so users have the best and fastest experience possible.
This role combines hardware architecture, machine learning, and performance optimization, making it a strong fit for someone passionate about advancing AI hardware. You'll work on projects that directly impact Google's AI infrastructure, from workload characterization to architectural modeling and optimization, and collaborate with teams across hardware design, software, compiler development, and ML research, gaining broad exposure to cutting-edge technology development.
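To give a flavor of the kind of modeling work described above (this sketch is illustrative and not taken from the posting), here is a minimal roofline-style estimate of whether a matrix multiply is compute-bound or memory-bound on a hypothetical accelerator. The peak-FLOP and bandwidth figures are invented for illustration and do not describe any real TPU generation.

```python
"""Illustrative first-order performance model (roofline-style) for a matmul
on a hypothetical accelerator. All hardware numbers are assumptions made up
for this sketch; they do not describe any real TPU."""

def matmul_roofline(m: int, n: int, k: int,
                    peak_flops: float = 100e12,     # hypothetical peak throughput (FLOP/s)
                    mem_bandwidth: float = 1.0e12   # hypothetical memory bandwidth (B/s)
                    ) -> dict:
    """Estimate time for an (m x k) @ (k x n) matmul in bf16 (2 bytes/element)."""
    flops = 2.0 * m * n * k                          # multiply-accumulate operation count
    bytes_moved = 2.0 * (m * k + k * n + m * n)      # read A and B, write C (no reuse modeled)
    arithmetic_intensity = flops / bytes_moved       # FLOPs per byte of memory traffic

    compute_time = flops / peak_flops
    memory_time = bytes_moved / mem_bandwidth
    return {
        "arithmetic_intensity": arithmetic_intensity,
        "bound": "compute" if compute_time > memory_time else "memory",
        "estimated_time_s": max(compute_time, memory_time),
    }

# A large square matmul tends to be compute-bound; a skinny one, memory-bound.
print(matmul_roofline(4096, 4096, 4096))
print(matmul_roofline(8, 4096, 4096))
```

Workload characterization in practice is far more detailed (tiling, on-chip reuse, interconnect effects), but this is the basic arithmetic-intensity reasoning such models start from.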
The ideal candidate will combine expertise in computer architecture, performance analysis, and software development, with a particular focus on ML hardware acceleration. In return, the role offers the chance to work at the forefront of AI hardware development, on technology that powers some of the world's most advanced AI applications.