LinkedIn is the world's largest professional network, built to help members of all backgrounds and experiences achieve more in their careers. Our vision is to create economic opportunity for every member of the global workforce. Every day our members use our products to make connections, discover opportunities, build skills and gain insights. We believe amazing things happen when we work together in an environment where everyone feels a true sense of belonging, and that what matters most in a candidate is having the skills needed to succeed. It inspires us to invest in our talent and support career growth. Join us to challenge yourself with work that matters.
This role will be based in Mountain View, CA, San Francisco, CA or Bellevue, WA. At LinkedIn, we trust each other to do our best work where it works best for us and our teams. This role offers hybrid work options, meaning you can work from home and commute to a LinkedIn office, depending on what's best for you and when your team needs to be together.
As part of LinkedIn's AI Platform group, the AI Training team is responsible for developing and maintaining highly available and scalable deep learning training solutions to power our rapidly growing AI use cases. The team scales LinkedIn's AI model training to hundreds of billions of parameters across all AI use cases, from recommendation models and large language models (Generative AI) to computer vision models. We optimize training performance across algorithms, AI frameworks, infrastructure software, and hardware to harness the power of our GPU fleet of thousands of the latest GPUs. The team also works closely with the open source community and includes many open source committers (TensorFlow, Horovod, Ray, Hadoop, etc.). Additionally, the team focuses on technologies such as LLMs, GNNs, incremental learning, online learning, and advanced LLM agents for training infrastructure.
As a Senior Staff Software Engineer on the AI Training Infra team, you will play a crucial role in leading and building the next-gen training infrastructure that powers AI use cases. You will design and implement high-performance AI training pipelines and data I/O, work with open source teams to identify and resolve issues in popular libraries like Hugging Face, Horovod, and PyTorch, debug and optimize deep learning training, and provide advanced support for internal AI teams in areas like model parallelism, data parallelism, ZeRO, automatic mixed precision, and kernel fusion. Finally, you will assist in and guide the development of containerized pipeline orchestration infrastructure, including developing and distributing stable base container images, providing advanced profiling and observability, and updating internally maintained versions of deep learning frameworks and their companion libraries like TensorFlow, PyTorch, DeepSpeed, GNNs, FlashAttention, and more.
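To illustrate the kind of training-loop work described above, here is a minimal sketch of a data-parallel, mixed-precision training step, assuming PyTorch with DistributedDataParallel and torch.cuda.amp; the model, data, and hyperparameters are toy placeholders, not a description of LinkedIn's actual stack.

```python
# Minimal sketch: data-parallel training with automatic mixed precision in PyTorch.
# The model and synthetic batch below are illustrative placeholders only.
# Launch with: torchrun --nproc_per_node=<num_gpus> train_sketch.py
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model standing in for a real recommendation / LLM workload.
    model = nn.Sequential(
        nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)
    ).cuda()
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    scaler = torch.cuda.amp.GradScaler()  # loss scaling for fp16 stability

    for step in range(100):
        # Synthetic batch; a production pipeline would stream sharded data here.
        x = torch.randn(32, 1024, device="cuda")
        y = torch.randint(0, 10, (32,), device="cuda")

        optimizer.zero_grad(set_to_none=True)
        with torch.cuda.amp.autocast():  # automatic mixed precision
            loss = nn.functional.cross_entropy(model(x), y)
        scaler.scale(loss).backward()  # gradients all-reduce across ranks via DDP
        scaler.step(optimizer)
        scaler.update()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```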
Responsibilities: