Meta Reality Labs Research is seeking PhD interns for groundbreaking research in audio signal processing, machine learning, and audio-visual learning for AR/VR applications. The team focuses on creating virtual sounds indistinguishable from reality and on redefining human hearing. This 12- to 24-week internship involves working on cutting-edge projects in multimodal representation learning, audio-visual scene analysis, egocentric audio-visual learning, multi-sensory speech enhancement, and acoustic activity localization.
The role combines advanced machine learning techniques with audio processing expertise to develop innovative solutions for Meta's AR/VR initiatives. You'll be part of the Meta Reality Labs Research team, which is developing technologies for breakthrough AR glasses and VR headsets, spanning optics, displays, computer vision, audio, graphics, and more.
As an intern, you'll collaborate with world-class researchers and engineers on projects that directly shape the future of virtual and augmented reality. The position requires strong technical skills in Python, machine learning frameworks, and audio processing, along with research experience demonstrated through publications at top conferences.
This is an exceptional opportunity for PhD candidates interested in pushing the boundaries of audio technology in immersive experiences. You'll work in a collaborative environment, contributing to Meta's mission of creating technology that helps people connect in new ways. The role offers competitive compensation and the chance to work on projects that will shape the future of human-computer interaction.