Google is seeking a Silicon Architecture Engineer to join its Technical Infrastructure team, focusing on shaping the future of AI/ML hardware acceleration. The role centers on developing and advancing the Tensor Processing Unit (TPU) technology that powers Google's AI/ML applications. As a Silicon Architecture Engineer, you'll collaborate with hardware and software architects to design, model, analyze, and define next-generation TPUs.
The position requires a PhD in Electronics and Communication Engineering, Electrical Engineering, Computer Engineering, or a related technical field. You'll be working at the intersection of hardware and machine learning, developing architectural solutions that optimize performance, power consumption, and efficiency. Your responsibilities will span from ML workload characterization to developing architecture specifications for AI/ML roadmaps.
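ML workload characterization of the kind mentioned above often starts with a workload's arithmetic intensity (FLOPs per byte of memory traffic), which indicates whether it is compute-bound or memory-bound on a given accelerator. A minimal sketch for a matrix multiply, assuming each operand and the result are moved exactly once (ideal reuse; the function name and simplifications are illustrative, not any team's actual methodology):

```python
def matmul_arithmetic_intensity(m: int, n: int, k: int,
                                bytes_per_elem: int = 2) -> float:
    """Arithmetic intensity (FLOPs per byte) of an (M x K) @ (K x N) matmul,
    assuming each operand and the result cross memory exactly once."""
    flops = 2 * m * n * k  # one multiply + one add per multiply-accumulate
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)
    return flops / bytes_moved

# Large square matmuls grow more compute-bound as dimensions increase:
# intensity scales roughly linearly with the matrix dimension.
print(matmul_arithmetic_intensity(4096, 4096, 4096))
```

For a square matmul of dimension D this simplifies to D/3 FLOPs per byte at 2 bytes per element, which is why large matrix multiplies saturate compute rather than memory bandwidth.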
This is an excellent opportunity for someone passionate about hardware architecture and machine learning, offering the chance to work on the cutting-edge technology behind Google's AI infrastructure. You'll join the broader team that keeps Google's infrastructure running smoothly, helping ensure users have the fastest, most reliable experience possible.
The role involves extensive collaboration with various teams, including hardware design, software, compiler, ML model, and research teams. You'll be responsible for conducting power/performance analysis, developing models, and making crucial architectural decisions that will shape the future of Google's AI hardware capabilities.
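The power/performance analysis described above is commonly framed with a roofline model, which bounds attainable throughput by the lower of the compute ceiling and the memory-bandwidth ceiling. A minimal sketch, with assumed illustrative numbers rather than any specific TPU's specifications:

```python
def roofline_gflops(peak_gflops: float, bandwidth_gb_s: float,
                    intensity_flops_per_byte: float) -> float:
    """Attainable throughput under the roofline model:
    min(compute ceiling, bandwidth ceiling * arithmetic intensity)."""
    return min(peak_gflops, bandwidth_gb_s * intensity_flops_per_byte)

# Assumed example hardware: 100 TFLOP/s peak compute, 1.2 TB/s memory bandwidth.
peak, bw = 100_000.0, 1_200.0
ridge = peak / bw  # intensity above which a workload becomes compute-bound
print(ridge)
```

Workloads with arithmetic intensity below the ridge point are memory-bound, so architectural levers like larger on-chip buffers or higher-bandwidth memory help more than extra compute; above it, adding compute units pays off.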
Working at Google, you'll be part of a company that values diversity, equality, and inclusion, with a strong commitment to building a representative workforce. The position offers the opportunity to work on challenging problems at scale, with access to world-class resources and the chance to make a significant impact on the future of AI hardware acceleration.