Cognition

Research, Mid-Training

San Francisco · Full-time · Mid-level
Other · Research


Overview

Who We Are

We are an applied AI lab building end-to-end software agents. We're the team behind Devin, the first AI software engineer, and Windsurf, an AI-native IDE.

Technical Tools
python · pytorch · deep-learning · etl

These products represent our vision for AI that doesn't just assist engineers, but works alongside them as a genuine teammate.

Our team is small and talent-dense: world-class competitive programmers, former founders, and researchers from the frontier of AI, including Scale AI, Palantir, Cursor, Google DeepMind, and others.

Mid-training sits at the seam between pre-training and post-training and is one of the highest-leverage points in the entire model pipeline. This is where raw base model capability is sharpened into something that can reason deeply, generalize reliably, and serve as the foundation that post-training builds on.

You will own the late-stage training decisions that determine what our models are fundamentally capable of: data mix and quality uplift, annealing schedules, context length extension, capability injection across coding, math, and reasoning, and the synthetic data strategies that make all of it scale. This role spans what is classically considered both pre-training and post-training. We don't distinguish between research and engineering; we expect both.

What You'll Do

  • Data Mix and Quality Uplift: Design and iterate on high-quality data mixtures for late-stage and annealing training runs. Develop principled methods for sourcing, filtering, and weighting data to sharpen model capabilities without degrading general performance.

  • Capability Injection: Drive targeted improvements in coding, mathematics, and long-horizon reasoning through curated data strategies and training interventions. Translate research insights into measurable capability gains on our agents.

  • Synthetic Data Research: Develop and evaluate synthetic data pipelines that generate training signal at scale. Understand the limits and failure modes of synthetic approaches and build methods that hold up in production training runs.

  • Annealing and Schedule Design: Research and optimize multi-stage learning rate schedules, warmup strategies, and compute allocation across training phases. Understand how schedule choices interact with data distribution and model behavior.

  • Context Length Extension: Research and implement methods for extending effective context length without degrading short-context performance. This includes positional encoding strategies, data construction, and targeted evaluation.

  • Evaluation and Iteration: Build evals that distinguish real capability improvements from benchmark overfitting. Close the loop between training decisions and what actually matters for Devin and our other systems in deployment.

  • Scaling and Methodology: Measure how mid-training interventions scale with compute and data. Develop new approaches when existing methods hit ceilings; we expect both rigorous empiricism and original thinking.
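As a rough illustration of the schedule-design work described above, consider a minimal two-phase learning-rate schedule: linear warmup followed by a cosine anneal to a floor. This is a generic sketch, not Cognition's actual recipe; the function name, phase structure, and parameters are all assumptions for illustration.

```python
import math

def lr_at_step(step: int, total_steps: int, warmup_steps: int,
               peak_lr: float, final_lr: float) -> float:
    """Hypothetical two-phase schedule: linear warmup, then cosine anneal.

    Real mid-training runs typically chain several such phases, each with
    its own data mixture; this sketch shows only the basic shape.
    """
    if step < warmup_steps:
        # Linear warmup from near zero up to peak_lr.
        return peak_lr * (step + 1) / warmup_steps
    # Cosine anneal from peak_lr down to final_lr over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return final_lr + 0.5 * (peak_lr - final_lr) * (1 + math.cos(math.pi * progress))
```

The research questions sit on top of this skeleton rather than inside it: where to place phase boundaries, how aggressively to decay, and how each phase's schedule interacts with its data distribution.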

What We're Looking For

  • Deep familiarity with the LLM training pipeline end to end: pre-training data, optimization, architecture, and how mid-training and post-training interact

  • Hands-on experience with continual pre-training, annealing, or late-stage data mixing for large models

  • Strong intuition for data quality: what makes a dataset useful for training, how to filter and curate at scale, and how data mix choices compound across evals

  • Experience developing or evaluating synthetic data pipelines for capability improvement

  • Proficiency in Python and deep learning frameworks (PyTorch); comfortable debugging distributed training at scale

  • Strong fundamentals in optimization, statistics, and ML theory; able to distinguish real effects from noise, instability, and overfitting

  • A track record of original contributions: publications, open-source impact, or internal results that moved a capability frontier

  • Comfort operating in ambiguous, fast-moving environments where the problem definition is as important as the solution

  • We care more about demonstrated capability than credentials. A PhD is one signal among many.

Why Join

  • Small, highly selective team where research and product move together; prototypes reach real deployment quickly

  • Compute is not a constraint: large allocations with training jobs routinely running across thousands of GPUs from day one

  • The environment rewards speed, autonomy, and technical depth with minimal process overhead; this is one of the most competitive and fast-moving problems in AI

Cognition is an equal opportunity employer. We do not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, veteran status, or any other protected characteristic under applicable law. We are committed to providing reasonable accommodations for candidates with disabilities throughout the hiring process - please let us know if you need any.

Location & Eligibility

Where is the job?
San Francisco, on-site at the office
Who can apply?
Same as job location

Listing Details

Posted
May 1, 2026
First seen
May 6, 2026
Last seen
May 8, 2026

Posting Health

Days active
0
Repost count
0
Trust Level
29%
Scored at
May 6, 2026

Signal breakdown

Freshness · Source trust · Content trust · Employer trust
