WitnessAI, a leader in innovative networking solutions, is seeking an ML Infrastructure Engineer to drive its machine learning operations forward. The role pairs cutting-edge ML infrastructure development with hands-on delivery of scalable, production-ready solutions.
The position offers an exciting opportunity to work with state-of-the-art machine learning infrastructure, with a focus on optimizing and scaling ML models in production environments. You'll be responsible for managing GPU resources, building continuous learning pipelines, and implementing advanced inference solutions on platforms such as NVIDIA Triton and vLLM.
As an ML Infrastructure Engineer, you'll collaborate with cross-functional teams of applied scientists, software engineers, and DevOps professionals. You'll support the company's mission directly by designing and maintaining scalable ML infrastructure components, optimizing workflows, and ensuring high performance of deployed models.
The ideal candidate brings 2+ years of experience in building and scaling ML systems, strong Python skills, and expertise in cloud platforms, particularly AWS. You'll work in a hybrid environment in the San Francisco Bay Area, with comprehensive benefits including health insurance, 401(k), and professional development opportunities.
This role is perfect for someone who pairs deep technical expertise in ML infrastructure with strong problem-solving abilities and excellent communication skills. You'll be at the forefront of implementing and optimizing ML systems while contributing to a company that's pushing the boundaries of networking solutions.