At Hugging Face, we're leading the AI revolution with a mission to democratize machine learning. The company has built the fastest-growing platform for AI builders, serving over 5 million users and 100k organizations who have collectively shared over 1M models, 300k datasets, and 300k apps. Our open-source libraries have garnered more than 400k stars on GitHub.
The ML Optimization team works with top hardware innovators, supporting platforms such as AWS Inferentia and Trainium, AMD CPUs and Instinct GPUs, NVIDIA GPUs, Google TPUs, Intel CPUs, and Habana accelerators. Central to these partnerships is Optimum, our open-source library that connects the Hugging Face ecosystem with specific hardware platforms.
As an intern on the ML Optimization team, you'll contribute to developing cutting-edge solutions for various hardware platforms, working alongside world-class experts. Your role will involve creating an online exporter tool, writing comprehensive deployment guides, designing user flows, conducting hardware optimization experiments, and sharing insights with the community.
We prioritize diversity, equity, and inclusion, fostering a workplace where everyone feels respected and supported. The company provides excellent development opportunities, including reimbursement for conferences and education. We offer flexible working arrangements with remote options and office spaces in the US, Canada, and Europe.
The ideal candidate should be passionate about open-source development and AI, with a strong interest in hardware optimization. You'll be part of a community that believes in collaborative scientific advancement and supports the broader ML/AI ecosystem. This internship offers a unique opportunity to make a meaningful impact on the AI landscape while learning from industry experts and working with cutting-edge technologies.