Join Salesforce's Einstein Products & Platform team, where we're democratizing AI and transforming how our Salesforce Ohana builds trusted machine learning and AI products. Our platform enables the creation, deployment, and management of Generative AI and Predictive AI applications across all clouds, serving over a billion predictions daily and training thousands of models. We're at the forefront of LLM integration, working with both internal and external models to enhance Salesforce use cases.
As a Software Engineer in ML Infrastructure, you'll be instrumental in designing and delivering scalable generative AI services that integrate with numerous applications and serve thousands of tenants. You'll work with cutting-edge technologies in a distributed microservice architecture, utilizing modern containerized deployment stacks and cloud platforms. Your role will involve close collaboration with Product Managers, Architects, Data Scientists, and Deep Learning Researchers to bring innovative technologies to production.
We're looking for someone with strong experience in ML engineering and distributed systems who can handle the challenges of building and maintaining large-scale AI infrastructure. You'll need expertise in JVM-based languages and Python, along with experience in modern data storage, messaging, and processing frameworks. This role offers the opportunity to work on challenging problems at scale, contributing to a platform that serves over a billion predictions daily and manages thousands of models.
Join our team and help transform how AI is integrated into enterprise applications, working with the latest in machine learning technologies and cloud infrastructure. You'll have the chance to make a significant impact on our platform's evolution and help shape the future of AI in enterprise software.