Software Engineer - RL Environments
AfterQuery builds the training data and evaluation infrastructure that frontier AI labs use to make their models better. We work with the world's leading labs to design high-signal datasets and run rigorous evaluations that go beyond static benchmarks. We are a small, early team (post-Series A) where individual contributors have a direct impact on how the next generation of models learn and improve.
As a SWE (Environments), you will design the datasets and evaluation rubrics that directly influence how frontier models learn. You'll work hands-on with research teams at top AI labs, experimenting with data collection strategies, diagnosing model failure modes, and developing the metrics that determine whether a model is actually improving. You'll go from hypothesis to live experiment quickly, and your output will feed directly into model training runs at scale.
Day to day, you will design data slices that expose meaningful failure modes across domains like finance, code, and enterprise workflows. You will build and refine reward signals for RLHF and RLVR pipelines. You will develop quantitative frameworks for measuring dataset quality, diversity, and downstream impact on alignment and capability. You will partner with lab research teams to translate their training objectives into concrete data and evaluation specifications.
Responsibilities
- Design data slices and explore data shapes that expose meaningful model failure modes across domains like finance, code, and enterprise workflows
- Build and refine evaluation rubrics and reward signals for RLHF and RLVR training pipelines
- Model annotator behavior and run experiments to improve different model capabilities
- Develop quantitative frameworks for measuring dataset quality, diversity, and downstream impact on model alignment and capability
- Create and manage both real-world and synthetic data pipelines
- Partner with lab research teams to translate their training objectives into concrete data and evaluation specifications
Qualifications
- 1-4 years of experience
- Major plus if you have worked at or interned with an RL environment company, or with an AI safety or benchmarking org such as METR or Artificial Analysis
- Genuine obsession with how data structure, selection, and quality drive model behavior
- Ability to design lightweight experiments, move fast, and extract actionable insights from messy results
- Former founders and early engineers at early-stage startups are a plus. We don't filter on pedigree; we want people who can demonstrate they work hard, learn fast, and care deeply about getting the details right.
What We Offer
$200k base + profit share (around 150% of base) + competitive equity
Listing Details
- Posted: April 14, 2026
- First seen: May 6, 2026
- Last seen: May 7, 2026