magic.dev
$225K – $550K/yr

Member of Technical Staff, Inference & RL Systems

San Francisco · Full-time · Lead


Technical Tools
distributed-systems, networking

Magic’s mission is to build safe AGI that accelerates humanity’s progress on the world’s most important problems. We believe the most promising path to safe AGI lies in automating research and code generation to improve models and solve alignment more reliably than humans can alone. Our approach combines frontier-scale pre-training, domain-specific RL, ultra-long context, and inference-time compute to achieve this goal.

About the Role


As a Software Engineer on the Inference & RL Systems team, you will design and operate the distributed systems that serve our models in production and power large-scale post-training workflows.

This role sits at the boundary between model execution and distributed infrastructure. You will work on systems that determine inference latency, throughput, and stability, and that keep RL and post-training loops reliable.

Magic’s long-context models introduce demanding execution constraints: KV-cache scaling, memory pressure under long sequences, batching trade-offs, long-horizon trajectory rollouts, and sustained throughput under real-world workloads. You will own the infrastructure that makes both production inference and large-scale RL iteration fast and reliable.
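To see why KV-cache scaling dominates memory pressure at long context, a back-of-envelope sizing calculation helps. The sketch below is illustrative only: the model dimensions (`num_layers`, `num_kv_heads`, `head_dim`) are hypothetical placeholders, not Magic's actual architecture.

```python
# Back-of-envelope KV-cache sizing for long-context inference.
# All model dimensions below are illustrative, not any real model's.

def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   seq_len: int, batch_size: int,
                   bytes_per_elem: int = 2) -> int:
    """Bytes needed to cache attention keys and values for a batch.

    The leading factor of 2 accounts for storing both K and V
    at every layer; bytes_per_elem=2 assumes an fp16/bf16 cache.
    """
    return (2 * num_layers * num_kv_heads * head_dim
            * seq_len * batch_size * bytes_per_elem)

# Example: a hypothetical 48-layer model with 8 KV heads of dim 128,
# serving a batch of 4 sequences at 1M tokens each in fp16.
total = kv_cache_bytes(num_layers=48, num_kv_heads=8, head_dim=128,
                       seq_len=1_000_000, batch_size=4)
print(f"{total / 2**30:.0f} GiB of KV cache")  # → 732 GiB of KV cache
```

Even with these modest placeholder dimensions, four million-token sequences demand hundreds of GiB of cache — far beyond a single accelerator's memory — which is what makes cache management, batching, and scheduling first-order systems problems at this scale.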

  • Design and scale high-performance inference serving systems

  • Optimize KV-cache management, batching strategies, and scheduling

  • Improve throughput and latency for long-context workloads

  • Build and maintain distributed RL and post-training infrastructure

  • Improve reliability of rollout, evaluation, and reward pipelines

  • Automate fault detection and recovery for serving and RL systems

  • Profile and eliminate performance bottlenecks across GPU, networking, and storage layers

  • Collaborate with Kernels and Research to align execution systems with model architecture

Requirements

  • Strong software engineering and distributed systems fundamentals

  • Experience building or operating large-scale inference or training systems

  • Deep understanding of GPU execution constraints and memory trade-offs

  • Experience debugging performance issues in production ML systems

  • Ability to reason about system-level trade-offs between latency, throughput, and cost

  • Track record of owning critical production infrastructure
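The latency/throughput trade-off mentioned above can be made concrete with a toy cost model for batched decoding: a larger batch amortizes fixed per-step overhead and raises aggregate throughput, but every request in the batch waits longer per decode step. The cost constants here are invented for illustration, not measurements of any real system.

```python
# Toy model of the batching trade-off in batched autoregressive decoding.
# base_ms and per_seq_ms are made-up illustrative constants.

def decode_step_ms(batch_size: int, base_ms: float = 8.0,
                   per_seq_ms: float = 0.5) -> float:
    """Hypothetical linear cost model for one batched decode step:
    a fixed kernel-launch/weight-read cost plus a per-sequence cost."""
    return base_ms + per_seq_ms * batch_size

def tokens_per_second(batch_size: int) -> float:
    """Aggregate throughput: the batch emits batch_size tokens per step."""
    return batch_size * 1000.0 / decode_step_ms(batch_size)

for b in (1, 8, 32, 128):
    print(f"batch={b:4d}  step={decode_step_ms(b):5.1f} ms  "
          f"throughput={tokens_per_second(b):7.0f} tok/s")
```

Under this model, going from batch 1 to batch 32 multiplies throughput by more than 10x while roughly tripling per-step latency, and the gains flatten as the per-sequence term starts to dominate — the kind of system-level reasoning the role calls for, with cost entering once each configuration is mapped to GPU-hours.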

What We Offer

  • Annual salary range: $225K – $550K

  • Equity is a significant part of total compensation, in addition to salary

  • 401(k) plan with 6% salary matching

  • Generous health, dental, and vision insurance for you and your dependents

  • Unlimited paid time off

  • Visa sponsorship and relocation stipend to bring you to SF, if possible

  • A small, fast-paced, highly focused team
Values

  • Integrity. Words and actions should be aligned

  • Hands-on. At Magic, everyone is building

  • Teamwork. We move as one team, not N individuals

  • Focus. Safely deploy AGI. Everything else is noise

  • Quality. Magic should feel like magic

Location & Eligibility

Where is the job: San Francisco (on-site at the office)
Who can apply: Same as job location

Listing Details

Posted: February 28, 2026
First seen: May 8, 2026
Last seen: May 8, 2026


