Member of Technical Staff, RL Research & Environments
Quick Summary
Magic’s mission is to build safe AGI that accelerates humanity’s progress on the world’s most important problems. We believe the most promising path to safe AGI lies in automating research and code generation to improve models and solve alignment more reliably than humans can alone. Our approach combines frontier-scale pre-training, domain-specific RL, ultra-long context, and inference-time compute to achieve this goal.
About the Role
As a Software Engineer on the RL Research & Environments team, you will design and operate the data, evaluation, and environment systems that improve model capabilities after pre-training.
This role focuses on post-training: identifying capability gaps, building targeted datasets, designing reward signals, and running iterative training loops that measurably improve user-facing behavior. You will own the infrastructure and experimental workflows that connect product priorities to concrete capability gains.
Magic’s long-context models introduce distinct post-training challenges: long-horizon reasoning, sustained coherence over extended trajectories, context-use quality, and tool-augmented behavior. You will build systems that expose failure modes, generate high-signal training data, and enable rapid RL iteration at scale.
This role can evolve into ownership of major capability areas, deeper RL systems work, or broader influence over post-training strategy as Magic scales long-context model performance and reliability.
Responsibilities
- Design and build post-training datasets using synthetic generation, targeted data collection, and self-play
- Implement filtering, scoring, and mixture strategies for RL and post-training corpora
- Build and maintain evaluation frameworks that surface long-context failure modes
- Design reward signals and training environments for targeted capability improvements
- Run ablations across data sources, reward designs, and long-horizon task structures
- Improve the reliability and observability of post-training data and environment pipelines
- Collaborate closely with Product and Research to translate capability goals into measurable iteration cycles
Requirements
- Strong software engineering fundamentals
- Experience building or operating large-scale data or ML systems
- Ability to design and interpret experiments that measure changes in model behavior
- Comfort working at the intersection of ML, data systems, and infrastructure
- Strong attention to data quality and evaluation rigor
- Track record of owning experimental or production systems end-to-end
What We Offer
- Integrity. Words and actions should be aligned.
- Hands-on. At Magic, everyone is building.
- Teamwork. We move as one team, not N individuals.
- Focus. Safely deploy AGI. Everything else is noise.
- Quality. Magic should feel like magic.
Location & Eligibility
Listing Details
- Posted: November 8, 2024
- First seen: May 8, 2026
- Last seen: May 8, 2026