Google is seeking a Silicon Architecture Engineer to join its Technical Infrastructure team, focusing on shaping the future of AI/ML hardware acceleration. This role is central to developing and advancing the Tensor Processing Unit (TPU) technology that powers Google's AI/ML applications. The position offers an exciting opportunity to work at the intersection of hardware and machine learning, collaborating with diverse teams to architect, model, and define next-generation TPUs.
As a Silicon Architecture Engineer, you'll be responsible for characterizing ML workloads, developing architectural models, and writing specifications that anticipate future AI/ML computing needs. You'll work closely with hardware design, software, compiler, and research teams to define clean hardware/software interfaces and deliver strong performance.
The role requires a PhD in a relevant field and strong experience with accelerator architectures and programming. You'll be part of the team that keeps Google's infrastructure running smoothly, ensuring users have the best possible experience. This is an excellent opportunity for someone passionate about hardware architecture and AI/ML technology to make a significant impact on Google's future computing capabilities.
You'll collaborate with world-class engineers and researchers at the forefront of AI/ML hardware acceleration, architecting next-generation TPUs that balance performance, power, features, and cost. Google's commitment to diversity, equity, and inclusion fosters a supportive work environment where innovation and creativity can thrive.