At voize, we're revolutionizing the healthcare industry with AI technology that enables nurses to speak their documentation into smartphones, automatically generating accurate documentation entries. This innovative solution saves nurses an average of 39 minutes daily while improving documentation quality.
As an MLOps Engineer, you'll be at the forefront of developing and scaling robust ML infrastructure. Your role involves managing GPU clusters, ML controllers, and cloud and on-prem resources to ensure smooth model training and deployment at scale. You'll be responsible for deploying and managing hosted models for production use cases, assisted labeling, and ML-powered backend features such as retrieval-augmented generation (RAG).
The company is YCombinator-funded and has shown impressive growth, currently serving over 600 senior care homes and growing 100% in the last 90 days. Our impact is significant, helping customers save over 3.5 million hours annually that can be redirected from paperwork to patient care.
Working at voize means joining a dynamic team that combines cutting-edge technology with meaningful social impact. You'll enjoy benefits like virtual stock options, flexible working hours, and remote work options. The company offers professional development through learning platforms, regular team events, and 30 vacation days plus your birthday off.
The ideal candidate will bring several years of experience deploying ML models in production, working with GPU clusters, and managing ETL and data pipelines. Strong expertise in container orchestration with Kubernetes and Docker is essential, as is proficiency in Infrastructure as Code tools such as Terraform and GitOps workflows.
This is an opportunity to make a real difference in healthcare while working with state-of-the-art technology in a rapidly growing startup environment.