magic.dev
$225K – $550K/yr

Member of Technical Staff, Pre-training Systems

San Francisco · Full-time · Lead


Overview


Technical Tools
deep-learning · distributed-systems · networking

Magic’s mission is to build safe AGI that accelerates humanity’s progress on the world’s most important problems. We believe the most promising path to safe AGI lies in automating research and code generation to improve models and solve alignment more reliably than humans can alone. Our approach combines frontier-scale pre-training, domain-specific RL, ultra-long context, and inference-time compute to achieve this goal.

About the Role


As a Software Engineer on the Pre-training Systems team, you will design and operate the distributed infrastructure that trains Magic’s long-context models at scale.

This role focuses on large-scale model training across massive GPU clusters. You will work at the boundary between deep learning and distributed systems, ensuring that training runs are performant, reliable, and reproducible under extreme scale.

Magic’s long-context models create non-trivial systems challenges: sustained memory pressure, communication overhead across thousands of devices, long-running jobs that must survive failures, and efficient sequence packing under hardware constraints. You will own the systems that make large-scale pre-training stable and fast.
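The sequence-packing constraint mentioned above is, at its core, a bin-packing problem: variable-length documents must fill fixed-size context windows with minimal padding waste. A minimal first-fit-decreasing sketch (illustrative only; the function name and strategy are assumptions, not Magic's actual pipeline):

```python
def pack_sequences(lengths, window):
    """Pack sequences (given by token length) into context windows of
    capacity `window` using first-fit decreasing; returns a list of
    (sequence_indices, tokens_used) bins."""
    # Longest-first ordering tends to reduce the number of windows needed.
    order = sorted(range(len(lengths)), key=lambda i: -lengths[i])
    bins = []                              # each entry: [indices, used]
    for i in order:
        for b in bins:                     # first window with room wins
            if b[1] + lengths[i] <= window:
                b[0].append(i)
                b[1] += lengths[i]
                break
        else:                              # nothing fits: open a new window
            bins.append([[i], lengths[i]])
    return [(b[0], b[1]) for b in bins]
```

Real pre-training pipelines must also mask attention across packed-document boundaries, which this sketch omits.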

  • Scale distributed training across large GPU clusters (data, tensor, pipeline parallelism)

  • Optimize communication patterns and gradient synchronization

  • Improve checkpointing, fault tolerance, and job recovery systems

  • Profile and eliminate performance bottlenecks across compute, networking, and storage

  • Improve experiment reproducibility and orchestration workflows

  • Increase hardware utilization and training throughput

  • Collaborate with Kernels and Research to align model architecture with systems realities
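
The gradient-synchronization work above typically centers on collectives such as ring all-reduce. A pure-Python simulation of its two phases (reduce-scatter, then all-gather), purely for illustration; production systems use NCCL or similar libraries, not hand-rolled loops:

```python
def ring_allreduce(grads):
    """Average one gradient vector per worker via a simulated ring all-reduce.

    grads: a list of equal-length lists, one per worker. The vector is split
    into one chunk per worker; chunks circulate around the ring in
    n-1 reduce-scatter steps, then n-1 all-gather steps.
    """
    n = len(grads)
    size = len(grads[0])
    data = [list(g) for g in grads]            # per-worker buffers
    b = [c * size // n for c in range(n + 1)]  # chunk boundaries

    def send(src, dst, c, reduce):
        # Transfer chunk c from worker src to worker dst.
        for j in range(b[c], b[c + 1]):
            data[dst][j] = data[dst][j] + data[src][j] if reduce else data[src][j]

    # Phase 1: reduce-scatter. After n-1 steps, worker i holds the
    # fully summed chunk (i + 1) % n.
    for s in range(n - 1):
        for i in range(n):
            send(i, (i + 1) % n, (i - s) % n, reduce=True)

    # Phase 2: all-gather. Completed chunks circulate until every
    # worker holds the full summed vector.
    for s in range(n - 1):
        for i in range(n):
            send(i, (i + 1) % n, (i + 1 - s) % n, reduce=False)

    # Divide by n so each worker ends with the mean gradient.
    return [[v / n for v in w] for w in data]
```

The ring pattern keeps per-step traffic at roughly `size / n` elements per link regardless of cluster size, which is why bandwidth-bound gradient synchronization favors it over naive all-to-all exchange.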

Requirements

  • Strong software engineering and distributed systems fundamentals

  • Experience training large models in multi-node GPU environments

  • Deep understanding of parallelism strategies and performance trade-offs

  • Experience debugging cross-layer issues in production ML systems

  • Strong ownership mindset and ability to operate critical infrastructure

  • Track record of improving performance or reliability of large-scale systems

What We Offer

  • Annual salary range: $225K – $550K

  • Equity is a significant part of total compensation, in addition to salary

  • 401(k) plan with 6% salary matching

  • Generous health, dental, and vision insurance for you and your dependents

  • Unlimited paid time off

  • Visa sponsorship and relocation stipend to bring you to SF, if possible

  • A small, fast-paced, highly focused team

Values

  • Integrity. Words and actions should be aligned

  • Hands-on. At Magic, everyone is building

  • Teamwork. We move as one team, not N individuals

  • Focus. Safely deploy AGI. Everything else is noise

  • Quality. Magic should feel like magic

Location & Eligibility

Where is the job
San Francisco
On-site at the office
Who can apply
Same as job location

Listing Details

Posted
February 28, 2026
First seen
May 8, 2026
Last seen
May 8, 2026

Posting Health

Days active
0
Repost count
0
Trust Level
25%
Scored at
May 8, 2026

Signal breakdown

freshness · source trust · content trust · employer trust
