Member of Technical Staff, Inference & RL Systems
Quick Summary
Magic’s mission is to build safe AGI that accelerates humanity’s progress on the world’s most important problems. We believe the most promising path to safe AGI lies in automating research and code generation to improve models and solve alignment more reliably than humans can alone. Our approach combines frontier-scale pre-training, domain-specific RL, ultra-long context, and inference-time compute to achieve this goal.
About the Role
As a Software Engineer on the Inference & RL Systems team, you will design and operate the distributed systems that serve our models in production and power large-scale post-training workflows.
This role sits at the boundary between model execution and distributed infrastructure. You will work on systems that determine inference latency, throughput, and stability, and the reliability of RL and post-training loops.
Magic’s long-context models introduce demanding execution constraints: KV-cache scaling, memory pressure under long sequences, batching trade-offs, long-horizon trajectory rollouts, and sustained throughput under real-world workloads. You will own the infrastructure that makes both production inference and large-scale RL iteration fast and reliable.
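The KV-cache scaling constraint above can be made concrete with some back-of-envelope arithmetic. The sketch below is illustrative only: the model dimensions (layer count, KV heads, head size) are hypothetical assumptions, not Magic's actual architecture.

```python
# Back-of-envelope KV-cache sizing for long-context serving.
# All model dimensions here are illustrative assumptions, not a real deployment.

def kv_cache_bytes(seq_len, n_layers, n_kv_heads, head_dim, bytes_per_elem=2):
    """Bytes of KV cache for one sequence: two tensors (K and V) per layer,
    each of shape [n_kv_heads, seq_len, head_dim] at the given precision."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * seq_len

# Hypothetical model with grouped-query attention and an fp16 cache:
# 32 layers, 8 KV heads, head dim 128 -> 128 KiB of cache per token.
per_seq = kv_cache_bytes(seq_len=1_000_000, n_layers=32, n_kv_heads=8, head_dim=128)
print(f"{per_seq / 2**30:.1f} GiB per 1M-token sequence")  # → 122.1 GiB
```

Even at this modest (assumed) scale, a single million-token sequence outgrows one GPU's memory, which is why cache management, paging, and batching strategy dominate long-context serving design.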
Responsibilities
Design and scale high-performance inference serving systems
Optimize KV-cache management, batching strategies, and scheduling
Improve throughput and latency for long-context workloads
Build and maintain distributed RL and post-training infrastructure
Improve reliability of rollout, evaluation, and reward pipelines
Automate fault detection and recovery for serving and RL systems
Profile and eliminate performance bottlenecks across GPU, networking, and storage layers
Collaborate with Kernels and Research to align execution systems with model architecture
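The batching and memory trade-offs named in the list above reduce to a simple admission question: given a fixed KV-cache budget per GPU, how many concurrent sequences can be served at a target context length? The numbers below (GPU memory, weight footprint, per-token cache cost) are hypothetical, chosen only to illustrate the arithmetic.

```python
# Illustrative batching trade-off: concurrent sequences admissible under a
# per-GPU KV-cache memory budget. All numbers are assumptions for illustration.

def max_batch_size(gpu_mem_gib, weights_gib, kv_bytes_per_token, seq_len):
    """Sequences that fit after reserving memory for model weights."""
    free_bytes = (gpu_mem_gib - weights_gib) * 2**30
    per_seq_bytes = kv_bytes_per_token * seq_len
    return int(free_bytes // per_seq_bytes)

# Hypothetical: 80 GiB GPU, 16 GiB of weights, 128 KiB of KV cache per token.
print(max_batch_size(80, 16, 128 * 1024, seq_len=32_768))  # → 16
```

Doubling the context length halves the admissible batch size, which is the core tension between long-context support and sustained throughput.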
Qualifications
Strong software engineering and distributed systems fundamentals
Experience building or operating large-scale inference or training systems
Deep understanding of GPU execution constraints and memory trade-offs
Experience debugging performance issues in production ML systems
Ability to reason about system-level trade-offs between latency, throughput, and cost
Track record of owning critical production infrastructure
What We Offer
Integrity. Words and actions should be aligned
Hands-on. At Magic, everyone is building
Teamwork. We move as one team, not N individuals
Focus. Safely deploy AGI. Everything else is noise
Quality. Magic should feel like magic
Location & Eligibility
Listing Details
- Posted: February 28, 2026