Google is seeking a Silicon Architecture/Design Engineer to shape the future of AI/ML hardware acceleration, specifically focusing on TPU (Tensor Processing Unit) technology. This role is central to powering Google's most demanding AI/ML workloads. As part of the Technical Infrastructure team, you'll collaborate with hardware and software architects to design next-generation TPUs, balancing performance, power, features, schedule, and cost.
The position involves working on cutting-edge technology that powers Google's AI infrastructure. You'll be responsible for developing architecture specifications, creating performance models, and working closely with various teams to ensure optimal hardware/software integration. The role requires expertise in both hardware architecture and machine learning, making it a unique opportunity to impact the future of AI computing.
The ideal candidate should have a PhD in a relevant field and experience with accelerator architectures and data center workloads. Strong programming skills in C++, Python, and Verilog are essential, along with familiarity with industry-standard EDA tools from Synopsys and Cadence. Knowledge of high-performance computing, low-power design techniques, and machine learning architectures is highly valued.
This position offers the opportunity to work with world-class engineers and researchers, contributing to groundbreaking advancements in AI hardware acceleration. You'll be part of Google's Technical Infrastructure team, which keeps Google's vast product portfolio running efficiently. The role combines deep technical expertise with collaborative teamwork, making it ideal for someone passionate about pushing the boundaries of AI hardware development.