Google is seeking a TPU Architect to join its hardware team, focusing on developing custom silicon solutions that power Google's direct-to-consumer products. This role combines machine learning, hardware architecture, and compiler optimization to push the boundaries of what's possible with Tensor Processing Units (TPUs).
As a TPU Architect, you'll be instrumental in analyzing and optimizing the performance and power efficiency of machine learning workloads on the TPU architecture. You'll work with cross-functional teams to develop tools, flows, and dashboards for comprehensive performance analysis, and propose architectural improvements based on your findings.
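To ground what that analysis work can look like in practice, here is a minimal sketch of capturing a profiling trace around a toy TensorFlow workload; the log directory and the matmul stand-in for a model step are illustrative assumptions, not part of the role description.

```python
# A minimal profiling sketch: capture a trace around a few iterations of a
# small, matmul-heavy workload. Runs on CPU/GPU as well; on a TPU the trace
# would additionally surface device-level op timing.
import tensorflow as tf

logdir = "/tmp/tpu_profile"  # hypothetical output path

@tf.function
def train_step(x, w):
    # Stand-in for a real model step: one matmul plus an activation.
    return tf.nn.relu(tf.matmul(x, w))

x = tf.random.normal([1024, 1024])
w = tf.random.normal([1024, 1024])

# Record a profile over several steps; the logs can then be inspected in
# TensorBoard's Profile tab for per-op timing and memory behavior.
tf.profiler.experimental.start(logdir)
for _ in range(10):
    train_step(x, w)
tf.profiler.experimental.stop()
```

Traces like this are the raw material that performance dashboards typically aggregate across workloads and hardware revisions.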
The ideal candidate brings strong expertise in computer architecture, with specific knowledge of microarchitecture, cache hierarchy, pipelining, and memory subsystems. Experience with machine learning accelerators and algorithms is highly valued, as is familiarity with compiler optimization and frameworks such as TensorFlow.
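As one illustration of the architecture-level reasoning this background supports, the sketch below runs a back-of-the-envelope roofline check on a single matrix multiplication; the peak throughput and bandwidth figures are placeholder assumptions, not published TPU specifications.

```python
# Roofline-style estimate: is a given matmul compute-bound or memory-bound?
PEAK_FLOPS = 100e12      # assumed peak throughput, FLOP/s (placeholder)
PEAK_BANDWIDTH = 1.0e12  # assumed memory bandwidth, bytes/s (placeholder)

def matmul_roofline(m, n, k, bytes_per_elem=2):
    """Estimate the bound for an (m x k) @ (k x n) matmul."""
    flops = 2 * m * n * k                                    # multiply-accumulates
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)   # read A, B; write C
    intensity = flops / bytes_moved                          # FLOPs per byte
    ridge_point = PEAK_FLOPS / PEAK_BANDWIDTH                # where the two limits cross
    bound = "compute-bound" if intensity >= ridge_point else "memory-bound"
    attainable = min(PEAK_FLOPS, intensity * PEAK_BANDWIDTH)
    return intensity, bound, attainable

intensity, bound, attainable = matmul_roofline(4096, 4096, 4096)
print(f"arithmetic intensity: {intensity:.1f} FLOP/byte ({bound}), "
      f"attainable throughput: {attainable / 1e12:.1f} TFLOP/s")
```

Comparing measured throughput against such an estimate is a common first step in deciding whether a workload needs compiler, memory-subsystem, or architectural changes.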
This position offers the opportunity to directly impact the future of Google's hardware experiences, working on technology that serves millions of users worldwide. You'll be part of a team that combines the best of Google AI, Software, and Hardware to create radically helpful experiences, making computing faster, more seamless, and more powerful.
Join Google's mission to organize the world's information and make it universally accessible and useful, while building hardware that advances the state of the art in machine learning acceleration. You'll work alongside industry-leading experts and help shape the future of ML hardware.