Eventual builds a data platform for data scientists and engineers. Its distributed data engine, Daft, runs at significant scale, on 800k CPU cores daily, more cores than the world's largest supercomputer. The company is backed by investors including Y Combinator and Caffeinated Capital, and its team includes engineers from Amazon, Databricks, Tesla, and Lyft.
As a Research Engineer focused on AI pretraining, you'll work at the intersection of AI and distributed systems. The role involves implementing advanced dataset and model-training techniques, including multimodal learning, synthetic data generation, and reinforcement learning from human feedback (RLHF). You'll collaborate with the Daft data engine team to optimize its performance for modern AI workloads.
The position requires expertise in Python and deep learning frameworks, along with a strong understanding of transformer architectures and distributed training systems. A PhD or equivalent research experience in Machine Learning or Computer Science is preferred. The role offers the opportunity to work with world-class experts in distributed computing and AI research while building next-generation AI infrastructure.
The company offers a hybrid work environment based in San Francisco, with at least three days per week in person. Benefits include competitive compensation, equity, and comprehensive health coverage. The interview process is thorough, including initial calls with the co-founders, technical screenings, and team meet-and-greets, ensuring a good mutual fit for this role at the intersection of AI and distributed systems.