LinkedIn, the world's largest professional network, is seeking a Staff AI Scientist to lead cutting-edge AI research and development. This role focuses on advancing LinkedIn's capabilities in large-scale foundation models and AI innovations, particularly in the development and optimization of Large Language Models (LLMs).
The position offers a unique opportunity to work at the intersection of AI research and engineering, where you'll develop and train multi-billion-parameter transformers and scale them to serve LinkedIn's global user base. You'll work with state-of-the-art deep learning architectures and contribute to the next generation of AI models.
As a Staff AI Scientist, you'll collaborate with a world-class team of researchers and engineers on complex problems such as distributed training, model parallelism, and system co-design. The role requires expertise in large-scale model training, post-training for planning and reasoning, fine-tuning strategies, and reinforcement learning.
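To give a flavor of the distributed-training work described above, here is a minimal, purely illustrative sketch of a data-parallel training loop using PyTorch's DistributedDataParallel. The model, data, and hyperparameters are placeholders chosen for brevity; this is not LinkedIn's actual stack or codebase.

```python
# Illustrative only: minimal multi-GPU data-parallel training with PyTorch DDP.
# Launch with: torchrun --nproc_per_node=<num_gpus> this_script.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; a production LLM would be a multi-billion-parameter
    # transformer sharded with model parallelism (e.g. FSDP or tensor parallel).
    model = torch.nn.TransformerEncoderLayer(d_model=512, nhead=8).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

    for step in range(100):
        # Synthetic batch stands in for sharded training data.
        batch = torch.randn(32, 128, 512, device=local_rank)
        loss = model(batch).pow(2).mean()  # dummy objective for illustration

        optimizer.zero_grad()
        loss.backward()  # DDP all-reduces gradients across ranks here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

At LinkedIn's scale, plain data parallelism is typically combined with model-parallel and memory-sharding techniques, which is exactly the system co-design challenge the role centers on.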
The position offers competitive compensation ($164,000 - $268,000) and benefits, including health and wellness programs. The work arrangement is hybrid, combining remote work with office presence in Sunnyvale, CA. This is an excellent opportunity for someone passionate about AI innovation and eager to make an impact at scale.
Key responsibilities include leading research initiatives, publishing findings in top AI venues, maintaining scalable AI pipelines, and mentoring junior engineers. The ideal candidate will have 4+ years of experience in machine learning or AI engineering, strong programming skills in languages like Python and Java, and expertise in distributed training and hardware acceleration.
Join LinkedIn to be part of a team that's pushing the boundaries of AI technology while working in an inclusive environment that values diversity and professional growth. Your work will directly impact how millions of professionals connect and advance their careers worldwide.