AI Engineer & Researcher - Inference

xAI's mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge.
$180,000 - $440,000
Backend
In-Person
AI

Description For AI Engineer & Researcher - Inference

xAI is seeking an AI Engineer & Researcher specializing in Inference for its Bay Area locations. The role focuses on optimizing model inference, building reliable production serving systems, accelerating research on scaling test-time compute, and developing new ideas for AI systems that can accurately understand the universe and generate new knowledge.

The ideal candidate should have experience in:

  • System optimizations for model serving (batching, caching, load balancing, model parallelism)
  • Low-level optimizations for inference (CUDA kernels, code generation)
  • Algorithmic optimizations for inference (quantization, distillation, speculative decoding)
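To give a flavor of the last category, here is a minimal sketch of symmetric per-tensor int8 weight quantization, one of the algorithmic optimizations listed above. This is an illustrative toy, not xAI code; all names are invented, and production systems use per-channel or block-wise schemes on tensors rather than Python lists.

```python
def quantize_int8(weights):
    """Map float weights to int8 values with a single per-tensor scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [x * scale for x in q]

weights = [0.12, -0.5, 0.33, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Per-element reconstruction error is bounded by scale / 2
```

The payoff in serving is memory bandwidth: int8 weights are 4x smaller than fp32, which directly improves throughput for bandwidth-bound decode steps.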

xAI operates with a flat organizational structure, encouraging all employees to be hands-on and contribute directly to the company's mission. The team is small, highly motivated, and focused on engineering excellence. Strong communication skills are essential, as all engineers and researchers are expected to concisely and accurately share knowledge with their teammates.

The interview process consists of an initial application review, a 15-minute phone interview, and four technical interviews: a coding assessment, a systems hands-on, a project deep-dive, and a meet-and-greet with the wider team.

This is an opportunity to work on cutting-edge AI technology with a team dedicated to pushing the boundaries of what's possible in artificial intelligence.


Responsibilities For AI Engineer & Researcher - Inference

  • Optimizing the latency and throughput of model inference
  • Building reliable production serving systems to serve millions of users
  • Accelerating research on scaling test-time compute
  • Developing new ideas for AI systems that can accurately understand the universe and generate new knowledge

Requirements For AI Engineer & Researcher - Inference

  • Proficiency in Python and Rust
  • Experience in system optimizations for model serving (batching, caching, load balancing, model parallelism)
  • Experience in low-level optimizations for inference (CUDA kernels, code generation)
  • Experience in algorithmic optimizations for inference (quantization, distillation, speculative decoding)
  • Strong communication skills
  • Ability to work across multiple areas of the company
  • Strong work ethic and prioritization skills
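Speculative decoding, mentioned in the requirements, can be sketched as an accept/reject loop: a cheap draft model proposes a block of tokens and the target model keeps the longest prefix it agrees with. The toy below uses deterministic stand-in "models" purely for illustration; real implementations compare token probabilities and accept stochastically.

```python
def draft_model(prefix, k):
    # Stand-in draft: correct for two tokens, then guesses wrong.
    t = prefix[-1]
    return [t + 1, t + 2, 999][:k]

def target_model(prefix):
    # Stand-in target: the "true" next token is always last + 1.
    return prefix[-1] + 1

def speculative_step(prefix, k=3):
    """Verify a block of drafted tokens, keeping the agreed prefix."""
    proposal = draft_model(prefix, k)
    accepted = []
    for tok in proposal:
        if target_model(prefix + accepted) == tok:
            accepted.append(tok)
        else:
            # First disagreement: take the target's token and stop.
            accepted.append(target_model(prefix + accepted))
            break
    return prefix + accepted

seq = speculative_step([0])  # → [0, 1, 2, 3]
```

The latency win comes from verifying the whole drafted block in one target-model forward pass instead of one pass per token; when the draft agrees often, several tokens are emitted per step.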
