At Dynamo AI, we are at the forefront of developing LLMs with safety, privacy, and real-world responsibility in mind. Our ML team combines academic research culture with industry applications, empowering Fortune 500 companies to adopt frontier research for their next-generation LLM products.
As an ML Research Engineer focusing on LLM Safety, you'll be working on the premier platform for private and personalized LLMs. You'll have the opportunity to democratize state-of-the-art research on safe and responsible AI, free from Big Tech and academic bureaucracy. Your work will have a direct impact on end customers within weeks, not years.
Key responsibilities include owning an LLM vertical with a focus on safety, generating high-quality synthetic data, training LLMs, and conducting rigorous benchmarking. You'll deliver robust, scalable, and reproducible production code while pushing the envelope by developing novel techniques for the world's most harmless and helpful models.
We're looking for candidates with deep domain knowledge in LLM safety techniques, extensive experience in designing and implementing various LLM models, and the ability to adapt quickly to new research findings. If you're passionate about building a platform that empowers fair, unbiased, and responsible development of LLMs without sacrificing user privacy, this role is for you.
Join Dynamo AI, a 2023 CB Insights Top 100 AI Startup, and be part of a team that's shaping the future of safe and responsible AI. Your research will directly empower our customers to deploy safe and responsible LLMs more feasibly, making a significant impact on the industry.
Dynamo AI is committed to fair compensation practices, with salary ranges that reflect experience, expertise, and geographic location. Join us in our mission to democratize AI advancements responsibly and make a lasting impact on the field of machine learning and AI safety.