Software Engineer, Systems ML - Model Optimization (PhD)

Meta builds technologies that help people connect, find communities, and grow businesses.
$117,000 - $173,000
Machine Learning
Staff Software Engineer
Hybrid
5,000+ Employees
3+ years of experience
AI · AR/VR

Description For Software Engineer, Systems ML - Model Optimization (PhD)

Meta is seeking software engineers to enhance our AI inference infrastructure. As a team member, you'll play a crucial role in reducing the latency and power consumption of our AI models and building user-facing APIs for ML engineers. This position requires expertise in both machine learning and software engineering.

Responsibilities include:

  • Fine-tuning, quantizing, and deploying ML models on-device across phones and AR/VR devices
  • Optimizing models for latency and power consumption
  • Enabling efficient inference on GPUs
  • Building tooling to develop and deploy efficient models for inference
  • Partnering with teams across Meta Reality Labs to optimize key inference workloads

Minimum Qualifications:

  • PhD in Computer Science, Computer Engineering, or equivalent (completed or in progress)
  • Specialized experience in model quantization, compression, on-device inference, GPU inference, and PyTorch
  • Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
  • Must obtain and maintain work authorization in the country of employment

Preferred Qualifications:

  • Proven record of training, fine-tuning, and optimizing models
  • 3+ years of experience accelerating deep learning models for on-device inference
  • Experience optimizing machine learning model inference on NVIDIA GPUs
  • Familiarity with on-device inference platforms (ARM, Qualcomm DSP)
  • Experience with CUDA/Triton

Meta offers a competitive compensation package, including benefits, and is committed to providing reasonable accommodations for candidates with disabilities or other needs.

Benefits For Software Engineer, Systems ML - Model Optimization (PhD)

  • 401k
  • Medical Insurance
  • Dental Insurance
  • Vision Insurance
  • Equity

Jobs Related To Meta Software Engineer, Systems ML - Model Optimization (PhD)

Partner Engineer, Government

Senior technical role implementing Meta's Llama AI models in government projects, requiring extensive experience in AI/ML and government sector collaboration.

Machine Learning Software Engineering Manager

Lead machine learning engineering teams at Meta, combining technical expertise with people management to drive innovation in social technology.

Partner Engineer, Generative AI

Meta is hiring a Partner Engineer for Generative AI to work with strategic partners and cloud providers on LLMs and AI solutions.

Research Scientist

Meta is hiring a Research Scientist to develop optimization algorithms and improve platform performance using machine learning and distributed systems.

Research Scientist - Reality Labs

Meta is hiring a Research Scientist for Reality Labs to work on neuromotor interfaces and machine learning for AR/VR interaction.